Test Report: QEMU_macOS 19355

6d23947514fd7a389789fed180382829b6444229:2024-07-31:35588

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.67
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.03
36 TestAddons/Setup 10.34
37 TestCertOptions 10.13
38 TestCertExpiration 197.66
39 TestDockerFlags 12.31
40 TestForceSystemdFlag 12.46
41 TestForceSystemdEnv 10.22
47 TestErrorSpam/setup 9.98
56 TestFunctional/serial/StartWithProxy 9.89
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.98
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.11
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.28
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.48
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.29
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 25.72
150 TestMultiControlPlane/serial/StartCluster 10.02
151 TestMultiControlPlane/serial/DeployApp 79.26
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.07
159 TestMultiControlPlane/serial/RestartSecondaryNode 44.17
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.07
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.87
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
164 TestMultiControlPlane/serial/StopCluster 3.94
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.07
171 TestImageBuild/serial/Setup 9.91
174 TestJSONOutput/start/Command 9.81
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.19
206 TestMountStart/serial/StartWithMountFirst 10.06
209 TestMultiNode/serial/FreshStart2Nodes 9.9
210 TestMultiNode/serial/DeployApp2Nodes 116.24
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 45.17
218 TestMultiNode/serial/RestartKeepsNodes 8.87
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 2.08
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.38
226 TestPreload 10.18
228 TestScheduledStopUnix 10.19
229 TestSkaffold 12.26
232 TestRunningBinaryUpgrade 625
234 TestKubernetesUpgrade 18.6
248 TestStoppedBinaryUpgrade/Upgrade 599.67
258 TestPause/serial/Start 9.91
261 TestNoKubernetes/serial/StartWithK8s 11.22
262 TestNoKubernetes/serial/StartWithStopK8s 7.44
263 TestNoKubernetes/serial/Start 7.46
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.94
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.63
269 TestNoKubernetes/serial/StartNoArgs 5.4
271 TestNetworkPlugins/group/auto/Start 9.87
272 TestNetworkPlugins/group/flannel/Start 9.96
273 TestNetworkPlugins/group/kindnet/Start 9.87
274 TestNetworkPlugins/group/enable-default-cni/Start 9.82
275 TestNetworkPlugins/group/bridge/Start 9.79
276 TestNetworkPlugins/group/kubenet/Start 9.84
277 TestNetworkPlugins/group/custom-flannel/Start 9.81
278 TestNetworkPlugins/group/calico/Start 9.93
279 TestNetworkPlugins/group/false/Start 9.94
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.84
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 10.04
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.25
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.99
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/embed-certs/serial/SecondStart 7.44
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/embed-certs/serial/Pause 0.1
316 TestStartStop/group/newest-cni/serial/FirstStart 9.96
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.89
326 TestStartStop/group/newest-cni/serial/SecondStart 5.25
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (14.67s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-203000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-203000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.665762s)

-- stdout --
	{"specversion":"1.0","id":"d6495abc-9e15-4757-8be3-4d5d293246c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-203000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"96206e02-7f0b-4803-9e51-40e6af31ab07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"dc3ad05f-7fa1-4a82-a7b0-cad13548483d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig"}}
	{"specversion":"1.0","id":"1082d329-6feb-48a4-ad14-17d776b9949a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b5e38848-3edf-45cf-a099-cf53909af676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c8f03a0-ec5f-4ced-8aae-c368732c315c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube"}}
	{"specversion":"1.0","id":"a2601157-2f90-47eb-af35-269f6f3c4339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"ec50d034-a200-42fa-b46c-2fc265b0fb75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1db3783a-da92-4ee9-8453-44b79e950d5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e8125af0-c4d3-411a-a9be-0faa7b789a33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2875cfb5-ed78-4e41-b92a-7fe8304b9d30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-203000\" primary control-plane node in \"download-only-203000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ab80c28-e5ca-471b-9869-00d0fe61df45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae3d87d4-1a85-4e37-86be-09266992c3a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60] Decompressors:map[bz2:0x1400000ee90 gz:0x1400000ee98 tar:0x1400000ee40 tar.bz2:0x1400000ee50 tar.gz:0x1400000ee60 tar.xz:0x1400000ee70 tar.zst:0x1400000ee80 tbz2:0x1400000ee50 tgz:0x1400000ee60 txz:0x1400000ee70 tzst:0x1400000ee80 xz:0x1400000eea0 zip:0x1400000eeb0 zst:0x1400000eea8] Getters:map[file:0x1400090e6d0 http:0x1400070a370 https:0x1400070a3c0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"51adfd64-beea-4273-aedb-be03822fdcfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0731 12:13:56.318827    6988 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:56.318973    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:56.318977    6988 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:56.318979    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:56.319113    6988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	W0731 12:13:56.319201    6988 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19355-6494/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19355-6494/.minikube/config/config.json: no such file or directory
	I0731 12:13:56.320595    6988 out.go:298] Setting JSON to true
	I0731 12:13:56.338196    6988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4399,"bootTime":1722448837,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:13:56.338267    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:56.342474    6988 out.go:97] [download-only-203000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:56.342590    6988 notify.go:220] Checking for updates...
	W0731 12:13:56.342642    6988 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 12:13:56.345509    6988 out.go:169] MINIKUBE_LOCATION=19355
	I0731 12:13:56.348559    6988 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:13:56.353457    6988 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:56.356479    6988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:56.359452    6988 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	W0731 12:13:56.365438    6988 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:13:56.365651    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:56.368455    6988 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:13:56.368475    6988 start.go:297] selected driver: qemu2
	I0731 12:13:56.368489    6988 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:13:56.368556    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:13:56.371382    6988 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:13:56.376768    6988 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:13:56.376866    6988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:13:56.376932    6988 cni.go:84] Creating CNI manager for ""
	I0731 12:13:56.376949    6988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:13:56.376997    6988 start.go:340] cluster config:
	{Name:download-only-203000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:56.381001    6988 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:56.384503    6988 out.go:97] Downloading VM boot image ...
	I0731 12:13:56.384522    6988 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso
	I0731 12:14:02.974785    6988 out.go:97] Starting "download-only-203000" primary control-plane node in "download-only-203000" cluster
	I0731 12:14:02.974811    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:03.032310    6988 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:03.032317    6988 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:03.032465    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:03.037553    6988 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 12:14:03.037559    6988 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:03.127820    6988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:09.805636    6988 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:09.805790    6988 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:10.500410    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:14:10.500603    6988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-203000/config.json ...
	I0731 12:14:10.500620    6988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-203000/config.json: {Name:mkb14538d37fd692acda6d82852c4f68bca74e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:14:10.500864    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:10.501082    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 12:14:10.903793    6988 out.go:169] 
	W0731 12:14:10.910994    6988 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60] Decompressors:map[bz2:0x1400000ee90 gz:0x1400000ee98 tar:0x1400000ee40 tar.bz2:0x1400000ee50 tar.gz:0x1400000ee60 tar.xz:0x1400000ee70 tar.zst:0x1400000ee80 tbz2:0x1400000ee50 tgz:0x1400000ee60 txz:0x1400000ee70 tzst:0x1400000ee80 xz:0x1400000eea0 zip:0x1400000eeb0 zst:0x1400000eea8] Getters:map[file:0x1400090e6d0 http:0x1400070a370 https:0x1400070a3c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 12:14:10.911023    6988 out_reason.go:110] 
	W0731 12:14:10.917863    6988 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:14:10.921821    6988 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-203000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.67s)
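
The failure is fully explained by the error event above: the kubectl checksum fetch from dl.k8s.io returned HTTP 404, which suggests no darwin/arm64 kubectl artifact was ever published for v1.20.0 (Apple-silicon client binaries only appeared in later Kubernetes releases). A minimal, hedged way to confirm this from the agent, reusing the exact URLs from the error (plain curl, nothing minikube-specific):

	# Both HEAD requests should fail with 404; -f maps HTTP errors to a
	# non-zero exit status, so the check also works in CI scripts.
	curl -fsSLI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 || echo "checksum file missing"
	curl -fsSLI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl || echo "kubectl binary missing"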

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-917000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-917000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.868342875s)

-- stdout --
	* [offline-docker-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-917000" primary control-plane node in "offline-docker-917000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:25:29.958851    8432 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:29.958984    8432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:29.958988    8432 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:29.958990    8432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:29.959143    8432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:25:29.960430    8432 out.go:298] Setting JSON to false
	I0731 12:25:29.977771    8432 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5092,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:25:29.977867    8432 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:25:29.983821    8432 out.go:177] * [offline-docker-917000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:25:29.987865    8432 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:25:29.987912    8432 notify.go:220] Checking for updates...
	I0731 12:25:29.993889    8432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:25:29.996765    8432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:25:29.999830    8432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:25:30.002827    8432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:25:30.005843    8432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:25:30.009223    8432 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:30.009286    8432 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:25:30.012795    8432 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:25:30.019821    8432 start.go:297] selected driver: qemu2
	I0731 12:25:30.019833    8432 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:25:30.019840    8432 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:25:30.021735    8432 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:25:30.024788    8432 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:25:30.027877    8432 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:25:30.027893    8432 cni.go:84] Creating CNI manager for ""
	I0731 12:25:30.027899    8432 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:25:30.027902    8432 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:25:30.027941    8432 start.go:340] cluster config:
	{Name:offline-docker-917000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:30.031474    8432 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:30.038851    8432 out.go:177] * Starting "offline-docker-917000" primary control-plane node in "offline-docker-917000" cluster
	I0731 12:25:30.042810    8432 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:25:30.042842    8432 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:25:30.042852    8432 cache.go:56] Caching tarball of preloaded images
	I0731 12:25:30.042926    8432 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:25:30.042931    8432 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:25:30.042995    8432 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/offline-docker-917000/config.json ...
	I0731 12:25:30.043005    8432 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/offline-docker-917000/config.json: {Name:mk9b1b42849fe2e9680db5679a3d6d7aba92a251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:25:30.043281    8432 start.go:360] acquireMachinesLock for offline-docker-917000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:30.043315    8432 start.go:364] duration metric: took 25µs to acquireMachinesLock for "offline-docker-917000"
	I0731 12:25:30.043326    8432 start.go:93] Provisioning new machine with config: &{Name:offline-docker-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:25:30.043370    8432 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:25:30.050790    8432 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:25:30.066756    8432 start.go:159] libmachine.API.Create for "offline-docker-917000" (driver="qemu2")
	I0731 12:25:30.066840    8432 client.go:168] LocalClient.Create starting
	I0731 12:25:30.066927    8432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:25:30.066959    8432 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:30.066972    8432 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:30.067016    8432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:25:30.067038    8432 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:30.067046    8432 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:30.067488    8432 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:25:30.220520    8432 main.go:141] libmachine: Creating SSH key...
	I0731 12:25:30.321117    8432 main.go:141] libmachine: Creating Disk image...
	I0731 12:25:30.321129    8432 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:25:30.321339    8432 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2
	I0731 12:25:30.331123    8432 main.go:141] libmachine: STDOUT: 
	I0731 12:25:30.331152    8432 main.go:141] libmachine: STDERR: 
	I0731 12:25:30.331207    8432 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2 +20000M
	I0731 12:25:30.341984    8432 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:25:30.341998    8432 main.go:141] libmachine: STDERR: 
	I0731 12:25:30.342019    8432 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2
	I0731 12:25:30.342023    8432 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:25:30.342036    8432 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:30.342065    8432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:fc:84:79:c9:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2
	I0731 12:25:30.343588    8432 main.go:141] libmachine: STDOUT: 
	I0731 12:25:30.343604    8432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:30.343624    8432 client.go:171] duration metric: took 276.786583ms to LocalClient.Create
	I0731 12:25:32.345670    8432 start.go:128] duration metric: took 2.302372083s to createHost
	I0731 12:25:32.345688    8432 start.go:83] releasing machines lock for "offline-docker-917000", held for 2.302446667s
	W0731 12:25:32.345714    8432 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:32.358326    8432 out.go:177] * Deleting "offline-docker-917000" in qemu2 ...
	W0731 12:25:32.368141    8432 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:32.368152    8432 start.go:729] Will try again in 5 seconds ...
	I0731 12:25:37.370160    8432 start.go:360] acquireMachinesLock for offline-docker-917000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:37.370692    8432 start.go:364] duration metric: took 439.666µs to acquireMachinesLock for "offline-docker-917000"
	I0731 12:25:37.370855    8432 start.go:93] Provisioning new machine with config: &{Name:offline-docker-917000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:25:37.371120    8432 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:25:37.386642    8432 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:25:37.437096    8432 start.go:159] libmachine.API.Create for "offline-docker-917000" (driver="qemu2")
	I0731 12:25:37.437144    8432 client.go:168] LocalClient.Create starting
	I0731 12:25:37.437257    8432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:25:37.437323    8432 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:37.437342    8432 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:37.437404    8432 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:25:37.437448    8432 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:37.437460    8432 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:37.437968    8432 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:25:37.665949    8432 main.go:141] libmachine: Creating SSH key...
	I0731 12:25:37.727209    8432 main.go:141] libmachine: Creating Disk image...
	I0731 12:25:37.727213    8432 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:25:37.727393    8432 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2
	I0731 12:25:37.736380    8432 main.go:141] libmachine: STDOUT: 
	I0731 12:25:37.736402    8432 main.go:141] libmachine: STDERR: 
	I0731 12:25:37.736442    8432 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2 +20000M
	I0731 12:25:37.744137    8432 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:25:37.744153    8432 main.go:141] libmachine: STDERR: 
	I0731 12:25:37.744164    8432 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2
	I0731 12:25:37.744170    8432 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:25:37.744182    8432 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:37.744217    8432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:0c:ac:9c:0a:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/offline-docker-917000/disk.qcow2
	I0731 12:25:37.745738    8432 main.go:141] libmachine: STDOUT: 
	I0731 12:25:37.745754    8432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:37.745766    8432 client.go:171] duration metric: took 308.627584ms to LocalClient.Create
	I0731 12:25:39.747875    8432 start.go:128] duration metric: took 2.376806958s to createHost
	I0731 12:25:39.747941    8432 start.go:83] releasing machines lock for "offline-docker-917000", held for 2.377301417s
	W0731 12:25:39.748427    8432 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:39.764019    8432 out.go:177] 
	W0731 12:25:39.768171    8432 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:39.768227    8432 out.go:239] * 
	* 
	W0731 12:25:39.770763    8432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:25:39.781994    8432 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-917000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-31 12:25:39.799472 -0700 PDT m=+703.586803209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-917000 -n offline-docker-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-917000 -n offline-docker-917000: exit status 7 (63.652042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-917000
--- FAIL: TestOffline (10.03s)
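
Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet ("Connection refused"), so the VM never gets its network file descriptor and provisioning aborts. A hedged sketch of checks one might run on the agent; the launchd label below is an assumption based on a typical lima-vm/socket_vmnet install, not something taken from this log:

	# Socket path comes straight from the log's failing command line:
	#   /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ...
	ls -l /var/run/socket_vmnet                 # does the socket exist at all?
	sudo lsof -U | grep socket_vmnet            # is any process listening on it?
	# If socket_vmnet runs as a launchd daemon (assumed label), inspect and restart it:
	sudo launchctl print system/io.github.lima-vm.socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet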

TestAddons/Setup (10.34s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-565000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-565000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.334680959s)

-- stdout --
	* [addons-565000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-565000" primary control-plane node in "addons-565000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-565000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:14:35.603108    7095 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:35.603237    7095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:35.603244    7095 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:35.603247    7095 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:35.603388    7095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:14:35.604446    7095 out.go:298] Setting JSON to false
	I0731 12:14:35.620574    7095 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4438,"bootTime":1722448837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:14:35.620638    7095 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:14:35.625319    7095 out.go:177] * [addons-565000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:14:35.632271    7095 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:14:35.632299    7095 notify.go:220] Checking for updates...
	I0731 12:14:35.639301    7095 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:14:35.642285    7095 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:14:35.645309    7095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:14:35.648264    7095 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:14:35.651288    7095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:14:35.654488    7095 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:14:35.658295    7095 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:14:35.665265    7095 start.go:297] selected driver: qemu2
	I0731 12:14:35.665271    7095 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:14:35.665278    7095 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:14:35.667525    7095 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:14:35.670318    7095 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:14:35.671728    7095 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:14:35.671774    7095 cni.go:84] Creating CNI manager for ""
	I0731 12:14:35.671785    7095 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:14:35.671789    7095 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:14:35.671818    7095 start.go:340] cluster config:
	{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:35.675688    7095 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:14:35.683319    7095 out.go:177] * Starting "addons-565000" primary control-plane node in "addons-565000" cluster
	I0731 12:14:35.687278    7095 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:14:35.687296    7095 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:14:35.687310    7095 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:35.687383    7095 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:14:35.687389    7095 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:14:35.687606    7095 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/addons-565000/config.json ...
	I0731 12:14:35.687619    7095 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/addons-565000/config.json: {Name:mk665b0bb3e98cd3bd7f320a7e0e8f3e0249e2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:14:35.687952    7095 start.go:360] acquireMachinesLock for addons-565000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:14:35.688020    7095 start.go:364] duration metric: took 61.917µs to acquireMachinesLock for "addons-565000"
	I0731 12:14:35.688030    7095 start.go:93] Provisioning new machine with config: &{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:14:35.688056    7095 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:14:35.696275    7095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 12:14:35.714159    7095 start.go:159] libmachine.API.Create for "addons-565000" (driver="qemu2")
	I0731 12:14:35.714186    7095 client.go:168] LocalClient.Create starting
	I0731 12:14:35.714326    7095 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:14:35.777349    7095 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:14:35.905015    7095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:14:36.180571    7095 main.go:141] libmachine: Creating SSH key...
	I0731 12:14:36.253650    7095 main.go:141] libmachine: Creating Disk image...
	I0731 12:14:36.253655    7095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:14:36.253845    7095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2
	I0731 12:14:36.263088    7095 main.go:141] libmachine: STDOUT: 
	I0731 12:14:36.263106    7095 main.go:141] libmachine: STDERR: 
	I0731 12:14:36.263152    7095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2 +20000M
	I0731 12:14:36.270887    7095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:14:36.270903    7095 main.go:141] libmachine: STDERR: 
	I0731 12:14:36.270917    7095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2
	I0731 12:14:36.270920    7095 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:14:36.270950    7095 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:14:36.270983    7095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d5:e1:6e:1d:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2
	I0731 12:14:36.272547    7095 main.go:141] libmachine: STDOUT: 
	I0731 12:14:36.272563    7095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:14:36.272587    7095 client.go:171] duration metric: took 558.406958ms to LocalClient.Create
	I0731 12:14:38.274688    7095 start.go:128] duration metric: took 2.58670225s to createHost
	I0731 12:14:38.274771    7095 start.go:83] releasing machines lock for "addons-565000", held for 2.586826542s
	W0731 12:14:38.274850    7095 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:14:38.287135    7095 out.go:177] * Deleting "addons-565000" in qemu2 ...
	W0731 12:14:38.317459    7095 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:14:38.317488    7095 start.go:729] Will try again in 5 seconds ...
	I0731 12:14:43.319450    7095 start.go:360] acquireMachinesLock for addons-565000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:14:43.319724    7095 start.go:364] duration metric: took 176.625µs to acquireMachinesLock for "addons-565000"
	I0731 12:14:43.319796    7095 start.go:93] Provisioning new machine with config: &{Name:addons-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:14:43.320061    7095 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:14:43.330795    7095 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 12:14:43.383355    7095 start.go:159] libmachine.API.Create for "addons-565000" (driver="qemu2")
	I0731 12:14:43.383408    7095 client.go:168] LocalClient.Create starting
	I0731 12:14:43.383532    7095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:14:43.383595    7095 main.go:141] libmachine: Decoding PEM data...
	I0731 12:14:43.383611    7095 main.go:141] libmachine: Parsing certificate...
	I0731 12:14:43.383716    7095 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:14:43.383774    7095 main.go:141] libmachine: Decoding PEM data...
	I0731 12:14:43.383785    7095 main.go:141] libmachine: Parsing certificate...
	I0731 12:14:43.384361    7095 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:14:43.547282    7095 main.go:141] libmachine: Creating SSH key...
	I0731 12:14:43.844165    7095 main.go:141] libmachine: Creating Disk image...
	I0731 12:14:43.844180    7095 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:14:43.844407    7095 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2
	I0731 12:14:43.854418    7095 main.go:141] libmachine: STDOUT: 
	I0731 12:14:43.854440    7095 main.go:141] libmachine: STDERR: 
	I0731 12:14:43.854499    7095 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2 +20000M
	I0731 12:14:43.862558    7095 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:14:43.862574    7095 main.go:141] libmachine: STDERR: 
	I0731 12:14:43.862588    7095 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2
	I0731 12:14:43.862596    7095 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:14:43.862612    7095 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:14:43.862642    7095 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:1b:44:c8:f7:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/addons-565000/disk.qcow2
	I0731 12:14:43.864268    7095 main.go:141] libmachine: STDOUT: 
	I0731 12:14:43.864283    7095 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:14:43.864295    7095 client.go:171] duration metric: took 480.899125ms to LocalClient.Create
	I0731 12:14:45.865078    7095 start.go:128] duration metric: took 2.545015209s to createHost
	I0731 12:14:45.865173    7095 start.go:83] releasing machines lock for "addons-565000", held for 2.545510458s
	W0731 12:14:45.865584    7095 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:14:45.874251    7095 out.go:177] 
	W0731 12:14:45.883185    7095 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:14:45.883231    7095 out.go:239] * 
	* 
	W0731 12:14:45.886229    7095 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:14:45.895014    7095 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-565000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.34s)
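
The stderr trace above shows minikube's full retry path: LocalClient.Create fails when minikube execs socket_vmnet_client, the half-created profile is deleted, a second attempt five seconds later fails identically, and minikube exits with GUEST_PROVISION (exit status 80). The failing step can be reproduced outside the test suite; a sketch, assuming socket_vmnet_client's usual behavior of connecting to the socket and then exec'ing the given command:

	# Assumption: prints "ok" when the daemon is reachable; while the daemon
	# is down it fails with the same "Connection refused" seen in the log
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok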

TestCertOptions (10.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-558000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-558000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.867600333s)

-- stdout --
	* [cert-options-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-558000" primary control-plane node in "cert-options-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-558000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-558000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-558000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.670292ms)

-- stdout --
	* The control-plane node cert-options-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-558000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-558000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-558000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-558000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-558000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.103ms)

-- stdout --
	* The control-plane node cert-options-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-558000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-558000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-558000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-31 12:37:23.314321 -0700 PDT m=+1407.080517209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-558000 -n cert-options-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-558000 -n cert-options-558000: exit status 7 (29.381375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-558000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-558000
--- FAIL: TestCertOptions (10.13s)
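
Note that the SAN assertions at cert_options_test.go:69 fail only as a consequence of the cluster never starting: with the host stopped, the ssh step exits 83 and returns no certificate to parse. On a healthy cluster the check amounts to inspecting the Subject Alternative Name field of the apiserver certificate; a sketch using the same openssl invocation the test runs (the grep filter is an addition for readability, not part of the test):

	out/minikube-darwin-arm64 -p cert-options-558000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 'Subject Alternative Name'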

TestCertExpiration (197.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-657000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-657000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.276939041s)

-- stdout --
	* [cert-expiration-657000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-657000" primary control-plane node in "cert-expiration-657000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-657000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-657000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-657000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-657000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.227526542s)

-- stdout --
	* [cert-expiration-657000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-657000" primary control-plane node in "cert-expiration-657000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-657000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-657000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-657000" primary control-plane node in "cert-expiration-657000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-31 12:40:08.405334 -0700 PDT m=+1572.175051751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-657000 -n cert-expiration-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-657000 -n cert-expiration-657000: exit status 7 (66.369417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-657000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-657000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-657000
--- FAIL: TestCertExpiration (197.66s)
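
TestCertExpiration starts a cluster with --cert-expiration=3m, waits out the three minutes (which accounts for the ~197s wall time even though both starts fail within seconds), then restarts with --cert-expiration=8760h and expects a warning about expired certificates. Nothing was running here to inspect, but the underlying expiry check can be expressed directly; a sketch, reusing the in-guest certificate path from the TestCertOptions log above:

	# Print the notAfter date of the apiserver certificate
	out/minikube-darwin-arm64 -p cert-expiration-657000 ssh \
		"openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"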

TestDockerFlags (12.31s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-893000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-893000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.076197208s)

-- stdout --
	* [docker-flags-893000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-893000" primary control-plane node in "docker-flags-893000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-893000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:01.017606    9367 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:01.017738    9367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:01.017745    9367 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:01.017748    9367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:01.017879    9367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:37:01.018959    9367 out.go:298] Setting JSON to false
	I0731 12:37:01.035652    9367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5784,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:37:01.035720    9367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:01.042741    9367 out.go:177] * [docker-flags-893000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:01.051714    9367 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:37:01.051752    9367 notify.go:220] Checking for updates...
	I0731 12:37:01.059720    9367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:37:01.063658    9367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:01.066727    9367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:01.069703    9367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:37:01.072645    9367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:01.075938    9367 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:01.076009    9367 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:01.076062    9367 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:01.079677    9367 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:01.086687    9367 start.go:297] selected driver: qemu2
	I0731 12:37:01.086692    9367 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:01.086697    9367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:01.088907    9367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:01.091693    9367 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:01.094719    9367 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 12:37:01.094747    9367 cni.go:84] Creating CNI manager for ""
	I0731 12:37:01.094754    9367 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:37:01.094760    9367 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:37:01.094785    9367 start.go:340] cluster config:
	{Name:docker-flags-893000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:01.098067    9367 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:01.105553    9367 out.go:177] * Starting "docker-flags-893000" primary control-plane node in "docker-flags-893000" cluster
	I0731 12:37:01.109670    9367 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:01.109688    9367 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:01.109698    9367 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:01.109749    9367 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:01.109754    9367 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:01.109803    9367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/docker-flags-893000/config.json ...
	I0731 12:37:01.109813    9367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/docker-flags-893000/config.json: {Name:mk0b3ce2689758760b450130ae26e1a1de49e121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:01.110134    9367 start.go:360] acquireMachinesLock for docker-flags-893000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:03.136862    9367 start.go:364] duration metric: took 2.026704167s to acquireMachinesLock for "docker-flags-893000"
	I0731 12:37:03.137076    9367 start.go:93] Provisioning new machine with config: &{Name:docker-flags-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:03.137391    9367 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:03.145914    9367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:03.197584    9367 start.go:159] libmachine.API.Create for "docker-flags-893000" (driver="qemu2")
	I0731 12:37:03.197645    9367 client.go:168] LocalClient.Create starting
	I0731 12:37:03.197783    9367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:03.197840    9367 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:03.197858    9367 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:03.197927    9367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:03.197971    9367 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:03.197982    9367 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:03.198630    9367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:03.361871    9367 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:03.420501    9367 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:03.420510    9367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:03.420692    9367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2
	I0731 12:37:03.429947    9367 main.go:141] libmachine: STDOUT: 
	I0731 12:37:03.429964    9367 main.go:141] libmachine: STDERR: 
	I0731 12:37:03.430009    9367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2 +20000M
	I0731 12:37:03.437813    9367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:03.437834    9367 main.go:141] libmachine: STDERR: 
	I0731 12:37:03.437846    9367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2
	I0731 12:37:03.437853    9367 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:03.437860    9367 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:03.437891    9367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:64:51:55:4c:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2
	I0731 12:37:03.439545    9367 main.go:141] libmachine: STDOUT: 
	I0731 12:37:03.439559    9367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:03.439577    9367 client.go:171] duration metric: took 241.925542ms to LocalClient.Create
	I0731 12:37:05.441736    9367 start.go:128] duration metric: took 2.304322542s to createHost
	I0731 12:37:05.441813    9367 start.go:83] releasing machines lock for "docker-flags-893000", held for 2.304919917s
	W0731 12:37:05.441915    9367 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:05.455061    9367 out.go:177] * Deleting "docker-flags-893000" in qemu2 ...
	W0731 12:37:05.487906    9367 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:05.487926    9367 start.go:729] Will try again in 5 seconds ...
	I0731 12:37:10.490061    9367 start.go:360] acquireMachinesLock for docker-flags-893000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:10.490506    9367 start.go:364] duration metric: took 371.541µs to acquireMachinesLock for "docker-flags-893000"
	I0731 12:37:10.490618    9367 start.go:93] Provisioning new machine with config: &{Name:docker-flags-893000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-893000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:10.490928    9367 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:10.496578    9367 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:37:10.545697    9367 start.go:159] libmachine.API.Create for "docker-flags-893000" (driver="qemu2")
	I0731 12:37:10.545737    9367 client.go:168] LocalClient.Create starting
	I0731 12:37:10.545846    9367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:10.545916    9367 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:10.545933    9367 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:10.546005    9367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:10.546057    9367 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:10.546072    9367 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:10.546520    9367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:10.720839    9367 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:10.999068    9367 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:10.999077    9367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:10.999633    9367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2
	I0731 12:37:11.009331    9367 main.go:141] libmachine: STDOUT: 
	I0731 12:37:11.009352    9367 main.go:141] libmachine: STDERR: 
	I0731 12:37:11.009403    9367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2 +20000M
	I0731 12:37:11.017217    9367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:11.017232    9367 main.go:141] libmachine: STDERR: 
	I0731 12:37:11.017242    9367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2
	I0731 12:37:11.017246    9367 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:11.017260    9367 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:11.017291    9367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f7:07:95:fd:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/docker-flags-893000/disk.qcow2
	I0731 12:37:11.018960    9367 main.go:141] libmachine: STDOUT: 
	I0731 12:37:11.018973    9367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:11.018984    9367 client.go:171] duration metric: took 473.248208ms to LocalClient.Create
	I0731 12:37:13.021236    9367 start.go:128] duration metric: took 2.530289291s to createHost
	I0731 12:37:13.021304    9367 start.go:83] releasing machines lock for "docker-flags-893000", held for 2.530805541s
	W0731 12:37:13.021659    9367 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-893000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-893000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:13.032303    9367 out.go:177] 
	W0731 12:37:13.036291    9367 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:13.036335    9367 out.go:239] * 
	* 
	W0731 12:37:13.039033    9367 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:13.048329    9367 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-893000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-893000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-893000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.345416ms)

-- stdout --
	* The control-plane node docker-flags-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-893000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-893000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-893000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-893000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-893000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-893000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.84125ms)

-- stdout --
	* The control-plane node docker-flags-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-893000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-893000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-893000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-893000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-893000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-31 12:37:13.187564 -0700 PDT m=+1396.953607709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-893000 -n docker-flags-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-893000 -n docker-flags-893000: exit status 7 (29.250875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-893000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-893000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-893000
--- FAIL: TestDockerFlags (12.31s)
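
Every start attempt in the log above dies at the same step: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so no VM ever boots and the later ssh/status assertions only ever see state=Stopped. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew (the service name below is an assumption; the socket path is taken from the failing command line):

    # Does the daemon's unix socket exist?
    ls -l /var/run/socket_vmnet

    # Is a socket_vmnet daemon process running at all?
    pgrep -fl socket_vmnet

    # Restart the daemon (Homebrew-managed service; adjust for a from-source install)
    sudo brew services restart socket_vmnet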

TestForceSystemdFlag (12.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-908000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-908000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.273293208s)

-- stdout --
	* [force-systemd-flag-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-908000" primary control-plane node in "force-systemd-flag-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:36:24.757946    9208 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:36:24.758129    9208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:36:24.758136    9208 out.go:304] Setting ErrFile to fd 2...
	I0731 12:36:24.758139    9208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:36:24.758291    9208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:36:24.759712    9208 out.go:298] Setting JSON to false
	I0731 12:36:24.779176    9208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5747,"bootTime":1722448837,"procs":490,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:36:24.779286    9208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:36:24.787472    9208 out.go:177] * [force-systemd-flag-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:36:24.796808    9208 notify.go:220] Checking for updates...
	I0731 12:36:24.801704    9208 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:36:24.809727    9208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:36:24.818719    9208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:36:24.825731    9208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:36:24.832697    9208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:36:24.839631    9208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:36:24.844175    9208 config.go:182] Loaded profile config "NoKubernetes-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:36:24.844251    9208 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:36:24.844313    9208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:36:24.850733    9208 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:36:24.854688    9208 start.go:297] selected driver: qemu2
	I0731 12:36:24.854695    9208 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:36:24.854710    9208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:36:24.857713    9208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:36:24.861725    9208 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:36:24.865831    9208 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:36:24.865890    9208 cni.go:84] Creating CNI manager for ""
	I0731 12:36:24.865900    9208 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:36:24.865905    9208 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:36:24.865950    9208 start.go:340] cluster config:
	{Name:force-systemd-flag-908000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:36:24.870680    9208 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:36:24.876789    9208 out.go:177] * Starting "force-systemd-flag-908000" primary control-plane node in "force-systemd-flag-908000" cluster
	I0731 12:36:24.880807    9208 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:36:24.880851    9208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:36:24.880879    9208 cache.go:56] Caching tarball of preloaded images
	I0731 12:36:24.880971    9208 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:36:24.880978    9208 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:36:24.881051    9208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/force-systemd-flag-908000/config.json ...
	I0731 12:36:24.881063    9208 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/force-systemd-flag-908000/config.json: {Name:mk3a01d0c1a4bf5a62a5ad940c4732b0a977c696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:36:24.881300    9208 start.go:360] acquireMachinesLock for force-systemd-flag-908000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:36:27.070550    9208 start.go:364] duration metric: took 2.188846375s to acquireMachinesLock for "force-systemd-flag-908000"
	I0731 12:36:27.070550    9208 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:36:27.070942    9208 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:36:27.081307    9208 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:36:27.133605    9208 start.go:159] libmachine.API.Create for "force-systemd-flag-908000" (driver="qemu2")
	I0731 12:36:27.133669    9208 client.go:168] LocalClient.Create starting
	I0731 12:36:27.133847    9208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:36:27.133913    9208 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:27.133931    9208 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:27.134002    9208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:36:27.134051    9208 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:27.134068    9208 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:27.134739    9208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:36:27.373345    9208 main.go:141] libmachine: Creating SSH key...
	I0731 12:36:27.437908    9208 main.go:141] libmachine: Creating Disk image...
	I0731 12:36:27.437917    9208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:36:27.438098    9208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2
	I0731 12:36:27.447432    9208 main.go:141] libmachine: STDOUT: 
	I0731 12:36:27.447452    9208 main.go:141] libmachine: STDERR: 
	I0731 12:36:27.447513    9208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2 +20000M
	I0731 12:36:27.455472    9208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:36:27.455490    9208 main.go:141] libmachine: STDERR: 
	I0731 12:36:27.455503    9208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2
	I0731 12:36:27.455507    9208 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:36:27.455524    9208 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:36:27.455549    9208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:2a:22:b2:1f:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2
	I0731 12:36:27.457138    9208 main.go:141] libmachine: STDOUT: 
	I0731 12:36:27.457153    9208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:36:27.457172    9208 client.go:171] duration metric: took 323.448041ms to LocalClient.Create
	I0731 12:36:29.459629    9208 start.go:128] duration metric: took 2.388314166s to createHost
	I0731 12:36:29.459705    9208 start.go:83] releasing machines lock for "force-systemd-flag-908000", held for 2.388770209s
	W0731 12:36:29.459799    9208 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:36:29.483323    9208 out.go:177] * Deleting "force-systemd-flag-908000" in qemu2 ...
	W0731 12:36:29.513930    9208 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:36:29.513957    9208 start.go:729] Will try again in 5 seconds ...
	I0731 12:36:34.514867    9208 start.go:360] acquireMachinesLock for force-systemd-flag-908000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:36:34.521950    9208 start.go:364] duration metric: took 7.01375ms to acquireMachinesLock for "force-systemd-flag-908000"
	I0731 12:36:34.522051    9208 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:36:34.522343    9208 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:36:34.532761    9208 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:36:34.580342    9208 start.go:159] libmachine.API.Create for "force-systemd-flag-908000" (driver="qemu2")
	I0731 12:36:34.580391    9208 client.go:168] LocalClient.Create starting
	I0731 12:36:34.580553    9208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:36:34.580623    9208 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:34.580638    9208 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:34.580698    9208 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:36:34.580741    9208 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:34.580756    9208 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:34.581305    9208 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:36:34.827994    9208 main.go:141] libmachine: Creating SSH key...
	I0731 12:36:34.928557    9208 main.go:141] libmachine: Creating Disk image...
	I0731 12:36:34.928563    9208 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:36:34.928736    9208 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2
	I0731 12:36:34.938185    9208 main.go:141] libmachine: STDOUT: 
	I0731 12:36:34.938204    9208 main.go:141] libmachine: STDERR: 
	I0731 12:36:34.938250    9208 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2 +20000M
	I0731 12:36:34.946135    9208 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:36:34.946151    9208 main.go:141] libmachine: STDERR: 
	I0731 12:36:34.946163    9208 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2
	I0731 12:36:34.946172    9208 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:36:34.946180    9208 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:36:34.946222    9208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e1:10:49:7c:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-flag-908000/disk.qcow2
	I0731 12:36:34.947944    9208 main.go:141] libmachine: STDOUT: 
	I0731 12:36:34.947962    9208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:36:34.947973    9208 client.go:171] duration metric: took 367.54575ms to LocalClient.Create
	I0731 12:36:36.950308    9208 start.go:128] duration metric: took 2.42774075s to createHost
	I0731 12:36:36.950372    9208 start.go:83] releasing machines lock for "force-systemd-flag-908000", held for 2.428160917s
	W0731 12:36:36.950699    9208 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:36:36.959160    9208 out.go:177] 
	W0731 12:36:36.966519    9208 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:36:36.966546    9208 out.go:239] * 
	* 
	W0731 12:36:36.969156    9208 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:36:36.981315    9208 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-908000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-908000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-908000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.305333ms)

-- stdout --
	* The control-plane node force-systemd-flag-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-908000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-908000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-31 12:36:37.078458 -0700 PDT m=+1360.845023001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-908000 -n force-systemd-flag-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-908000 -n force-systemd-flag-908000: exit status 7 (32.548208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-908000
--- FAIL: TestForceSystemdFlag (12.46s)
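
This is the same socket_vmnet connection-refused failure, so the check this test exists for never ran: with --force-systemd, Docker inside the guest should report systemd as its cgroup driver. A sketch of the manual probe the test automates (command taken verbatim from the log above; on a healthy profile the expected output is "systemd"):

    out/minikube-darwin-arm64 -p force-systemd-flag-908000 ssh "docker info --format {{.CgroupDriver}}"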

TestForceSystemdEnv (10.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-494000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-494000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.000660958s)

-- stdout --
	* [force-systemd-env-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-494000" primary control-plane node in "force-systemd-env-494000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-494000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:36:50.794105    9324 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:36:50.794227    9324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:36:50.794230    9324 out.go:304] Setting ErrFile to fd 2...
	I0731 12:36:50.794233    9324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:36:50.794355    9324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:36:50.795472    9324 out.go:298] Setting JSON to false
	I0731 12:36:50.812605    9324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5773,"bootTime":1722448837,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:36:50.812687    9324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:36:50.817613    9324 out.go:177] * [force-systemd-env-494000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:36:50.825603    9324 notify.go:220] Checking for updates...
	I0731 12:36:50.829533    9324 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:36:50.838558    9324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:36:50.845527    9324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:36:50.852495    9324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:36:50.855548    9324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:36:50.862500    9324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 12:36:50.866889    9324 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:36:50.866959    9324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:36:50.870404    9324 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:36:50.877596    9324 start.go:297] selected driver: qemu2
	I0731 12:36:50.877603    9324 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:36:50.877609    9324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:36:50.879566    9324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:36:50.880905    9324 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:36:50.883591    9324 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:36:50.883605    9324 cni.go:84] Creating CNI manager for ""
	I0731 12:36:50.883612    9324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:36:50.883616    9324 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:36:50.883651    9324 start.go:340] cluster config:
	{Name:force-systemd-env-494000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:36:50.886955    9324 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:36:50.895510    9324 out.go:177] * Starting "force-systemd-env-494000" primary control-plane node in "force-systemd-env-494000" cluster
	I0731 12:36:50.900534    9324 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:36:50.900561    9324 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:36:50.900573    9324 cache.go:56] Caching tarball of preloaded images
	I0731 12:36:50.900637    9324 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:36:50.900642    9324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:36:50.900708    9324 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/force-systemd-env-494000/config.json ...
	I0731 12:36:50.900718    9324 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/force-systemd-env-494000/config.json: {Name:mkbf63cee370ea56f707b7c61d09867a47ee502d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:36:50.900902    9324 start.go:360] acquireMachinesLock for force-systemd-env-494000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:36:50.900935    9324 start.go:364] duration metric: took 24.416µs to acquireMachinesLock for "force-systemd-env-494000"
	I0731 12:36:50.900945    9324 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:36:50.900976    9324 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:36:50.905542    9324 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:36:50.922121    9324 start.go:159] libmachine.API.Create for "force-systemd-env-494000" (driver="qemu2")
	I0731 12:36:50.922145    9324 client.go:168] LocalClient.Create starting
	I0731 12:36:50.922209    9324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:36:50.922241    9324 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:50.922250    9324 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:50.922298    9324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:36:50.922322    9324 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:50.922332    9324 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:50.922774    9324 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:36:51.179403    9324 main.go:141] libmachine: Creating SSH key...
	I0731 12:36:51.256563    9324 main.go:141] libmachine: Creating Disk image...
	I0731 12:36:51.256569    9324 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:36:51.256754    9324 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2
	I0731 12:36:51.266135    9324 main.go:141] libmachine: STDOUT: 
	I0731 12:36:51.266158    9324 main.go:141] libmachine: STDERR: 
	I0731 12:36:51.266198    9324 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2 +20000M
	I0731 12:36:51.273969    9324 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:36:51.273983    9324 main.go:141] libmachine: STDERR: 
	I0731 12:36:51.273993    9324 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2
	I0731 12:36:51.273997    9324 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:36:51.274009    9324 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:36:51.274040    9324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:83:e9:6a:41:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2
	I0731 12:36:51.275627    9324 main.go:141] libmachine: STDOUT: 
	I0731 12:36:51.275642    9324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:36:51.275660    9324 client.go:171] duration metric: took 353.504917ms to LocalClient.Create
	I0731 12:36:53.277872    9324 start.go:128] duration metric: took 2.376841333s to createHost
	I0731 12:36:53.277927    9324 start.go:83] releasing machines lock for "force-systemd-env-494000", held for 2.376950375s
	W0731 12:36:53.278020    9324 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:36:53.301297    9324 out.go:177] * Deleting "force-systemd-env-494000" in qemu2 ...
	W0731 12:36:53.324838    9324 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:36:53.324857    9324 start.go:729] Will try again in 5 seconds ...
	I0731 12:36:58.327163    9324 start.go:360] acquireMachinesLock for force-systemd-env-494000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:36:58.327680    9324 start.go:364] duration metric: took 377.542µs to acquireMachinesLock for "force-systemd-env-494000"
	I0731 12:36:58.327832    9324 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:36:58.328068    9324 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:36:58.335713    9324 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 12:36:58.384726    9324 start.go:159] libmachine.API.Create for "force-systemd-env-494000" (driver="qemu2")
	I0731 12:36:58.384780    9324 client.go:168] LocalClient.Create starting
	I0731 12:36:58.384886    9324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:36:58.384942    9324 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:58.384960    9324 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:58.385022    9324 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:36:58.385066    9324 main.go:141] libmachine: Decoding PEM data...
	I0731 12:36:58.385077    9324 main.go:141] libmachine: Parsing certificate...
	I0731 12:36:58.385567    9324 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:36:58.546370    9324 main.go:141] libmachine: Creating SSH key...
	I0731 12:36:58.699995    9324 main.go:141] libmachine: Creating Disk image...
	I0731 12:36:58.700011    9324 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:36:58.700215    9324 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2
	I0731 12:36:58.709872    9324 main.go:141] libmachine: STDOUT: 
	I0731 12:36:58.709894    9324 main.go:141] libmachine: STDERR: 
	I0731 12:36:58.709947    9324 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2 +20000M
	I0731 12:36:58.717838    9324 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:36:58.717852    9324 main.go:141] libmachine: STDERR: 
	I0731 12:36:58.717876    9324 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2
	I0731 12:36:58.717882    9324 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:36:58.717893    9324 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:36:58.717918    9324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:35:d2:c6:9b:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/force-systemd-env-494000/disk.qcow2
	I0731 12:36:58.719525    9324 main.go:141] libmachine: STDOUT: 
	I0731 12:36:58.719542    9324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:36:58.719554    9324 client.go:171] duration metric: took 334.769458ms to LocalClient.Create
	I0731 12:37:00.721721    9324 start.go:128] duration metric: took 2.3936235s to createHost
	I0731 12:37:00.721771    9324 start.go:83] releasing machines lock for "force-systemd-env-494000", held for 2.394063208s
	W0731 12:37:00.722082    9324 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-494000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:00.738722    9324 out.go:177] 
	W0731 12:37:00.743849    9324 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:00.743888    9324 out.go:239] * 
	* 
	W0731 12:37:00.746054    9324 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:00.753611    9324 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-494000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-494000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-494000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (70.908625ms)

-- stdout --
	* The control-plane node force-systemd-env-494000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-494000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-494000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-31 12:37:00.839435 -0700 PDT m=+1384.605382001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-494000 -n force-systemd-env-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-494000 -n force-systemd-env-494000: exit status 7 (34.942542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-494000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-494000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-494000
--- FAIL: TestForceSystemdEnv (10.22s)
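Note: every attempt in this run fails on the same STDERR line, Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver launches QEMU through socket_vmnet_client, which must first reach the socket_vmnet daemon's Unix socket. Below is a minimal standalone sketch of that reachability check; it is not minikube code, and the messages are illustrative.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same Unix socket that socket_vmnet_client connects to
	// before handing a file descriptor to qemu-system-aarch64 (fd=3 in
	// the -netdev socket,id=net0,fd=3 argument seen in the logs).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		// "connection refused" means no socket_vmnet daemon is
		// listening on the path, matching the STDERR above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If this dial fails on the CI host, every qemu2 start in the run fails the same way, which is consistent with the uniform exit status 80 results in this report.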

TestErrorSpam/setup (9.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-249000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-249000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 --driver=qemu2 : exit status 80 (9.982141333s)

-- stdout --
	* [nospam-249000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-249000" primary control-plane node in "nospam-249000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-249000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-249000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-249000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-249000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19355
- KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-249000" primary control-plane node in "nospam-249000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-249000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-249000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.98s)
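Note: the single retry visible above ("StartHost failed, but will try again", then "Will try again in 5 seconds ...", then the GUEST_PROVISION exit) has a simple shape. A sketch of that shape follows; createHost is a hypothetical stand-in for illustration, not minikube's actual start.go logic.

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the host-creation step that fails twice in
// the run above; here it always returns the observed error.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: error provisioning guest: %v\n", err)
	}
}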

TestFunctional/serial/StartWithProxy (9.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-373000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-373000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.813272375s)

-- stdout --
	* [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-373000" primary control-plane node in "functional-373000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-373000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51083 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51083 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51083 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-373000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19355
- KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-373000" primary control-plane node in "functional-373000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-373000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51083 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51083 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51083 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (71.890042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.89s)
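Note: the repeated "Local proxy ignored" warning appears because HTTP_PROXY=localhost:51083 points at the host's loopback, which a guest VM cannot use (inside the guest, "localhost" is the guest itself). A sketch of that kind of check follows; the localProxy helper is hypothetical and simplified for illustration, not the function minikube actually uses.

package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// localProxy reports whether a proxy value such as "localhost:51083"
// points at the machine's loopback interface.
func localProxy(v string) bool {
	host := v
	if h, _, err := net.SplitHostPort(v); err == nil {
		host = h
	}
	host = strings.ToLower(host)
	return host == "localhost" || host == "127.0.0.1" || host == "::1"
}

func main() {
	if p := os.Getenv("HTTP_PROXY"); p != "" && localProxy(p) {
		fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", p)
	}
}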

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-373000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-373000 --alsologtostderr -v=8: exit status 80 (5.184658375s)

-- stdout --
	* [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-373000" primary control-plane node in "functional-373000" cluster
	* Restarting existing qemu2 VM for "functional-373000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-373000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:15:16.267895    7245 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:15:16.268063    7245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:16.268066    7245 out.go:304] Setting ErrFile to fd 2...
	I0731 12:15:16.268068    7245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:16.268222    7245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:15:16.269447    7245 out.go:298] Setting JSON to false
	I0731 12:15:16.285985    7245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4479,"bootTime":1722448837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:15:16.286048    7245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:15:16.290854    7245 out.go:177] * [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:15:16.297812    7245 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:15:16.297845    7245 notify.go:220] Checking for updates...
	I0731 12:15:16.304849    7245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:15:16.307842    7245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:15:16.310863    7245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:15:16.313842    7245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:15:16.316834    7245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:15:16.320113    7245 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:15:16.320169    7245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:15:16.324842    7245 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:15:16.331826    7245 start.go:297] selected driver: qemu2
	I0731 12:15:16.331832    7245 start.go:901] validating driver "qemu2" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:16.331901    7245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:15:16.334241    7245 cni.go:84] Creating CNI manager for ""
	I0731 12:15:16.334256    7245 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:15:16.334301    7245 start.go:340] cluster config:
	{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:16.337727    7245 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:15:16.345842    7245 out.go:177] * Starting "functional-373000" primary control-plane node in "functional-373000" cluster
	I0731 12:15:16.349840    7245 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:15:16.349863    7245 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:15:16.349875    7245 cache.go:56] Caching tarball of preloaded images
	I0731 12:15:16.349948    7245 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:15:16.349958    7245 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:15:16.350007    7245 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/functional-373000/config.json ...
	I0731 12:15:16.350477    7245 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:15:16.350508    7245 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "functional-373000"
	I0731 12:15:16.350517    7245 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:15:16.350522    7245 fix.go:54] fixHost starting: 
	I0731 12:15:16.350632    7245 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
	W0731 12:15:16.350640    7245 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:15:16.358845    7245 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
	I0731 12:15:16.361746    7245 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:15:16.361796    7245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
	I0731 12:15:16.363792    7245 main.go:141] libmachine: STDOUT: 
	I0731 12:15:16.363810    7245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:15:16.363833    7245 fix.go:56] duration metric: took 13.310333ms for fixHost
	I0731 12:15:16.363837    7245 start.go:83] releasing machines lock for "functional-373000", held for 13.325ms
	W0731 12:15:16.363844    7245 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:15:16.363872    7245 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:16.363876    7245 start.go:729] Will try again in 5 seconds ...
	I0731 12:15:21.365939    7245 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:15:21.366334    7245 start.go:364] duration metric: took 293.458µs to acquireMachinesLock for "functional-373000"
	I0731 12:15:21.366472    7245 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:15:21.366491    7245 fix.go:54] fixHost starting: 
	I0731 12:15:21.367182    7245 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
	W0731 12:15:21.367207    7245 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:15:21.371559    7245 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
	I0731 12:15:21.375707    7245 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:15:21.375898    7245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
	I0731 12:15:21.384776    7245 main.go:141] libmachine: STDOUT: 
	I0731 12:15:21.384859    7245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:15:21.384943    7245 fix.go:56] duration metric: took 18.450375ms for fixHost
	I0731 12:15:21.384968    7245 start.go:83] releasing machines lock for "functional-373000", held for 18.612208ms
	W0731 12:15:21.385129    7245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:21.393668    7245 out.go:177] 
	W0731 12:15:21.397820    7245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:15:21.397844    7245 out.go:239] * 
	* 
	W0731 12:15:21.400492    7245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:15:21.407701    7245 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-373000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.186535916s for "functional-373000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (70.533ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
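Note: as the "executing:" lines show, libmachine never opens /var/run/socket_vmnet itself. It runs socket_vmnet_client, which dials the socket and passes the connection to qemu-system-aarch64 as fd=3 via -netdev socket,id=net0,fd=3. A reduced sketch of driving that wrapper with os/exec follows; the flag list is abbreviated, and the full command is the one in the log above.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Flag list abbreviated to the shape of the command; the complete
	// argument vector appears verbatim in the "executing:" log line.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet", // socket to dial; its fd becomes fd=3 in QEMU
		"qemu-system-aarch64", "-M", "virt", "-cpu", "host",
		"-display", "none", "-accel", "hvf", "-m", "4000", "-smp", "2",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no daemon listening, this reproduces the log's STDERR:
		// Failed to connect to "/var/run/socket_vmnet": Connection refused
		fmt.Fprintf(os.Stderr, "STDERR: %s(%v)\n", out, err)
		return
	}
	fmt.Printf("STDOUT: %s\n", out)
}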

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.478667ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-373000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.594542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
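Note: because no cluster ever came up, the kubeconfig has no current context, so kubectl config current-context exits 1 with "current-context is not set". A minimal sketch of performing the same check from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		// Matches the failure above: exit status 1, current-context is not set.
		fmt.Println("no current context:", err)
		return
	}
	fmt.Println("current context:", strings.TrimSpace(string(out)))
}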

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-373000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-373000 get po -A: exit status 1 (25.86175ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-373000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-373000\n"*: args "kubectl --context functional-373000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-373000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (30.045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl images: exit status 83 (49.84825ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.823ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-373000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.962583ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.832167ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-373000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 kubectl -- --context functional-373000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 kubectl -- --context functional-373000 get pods: exit status 1 (704.635083ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-373000
	* no server found for cluster "functional-373000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-373000 kubectl -- --context functional-373000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (31.3275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)
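The stderr here shows kubectl giving up before contacting any server: the functional-373000 context is missing from the kubeconfig, which is consistent with the earlier failed start never having written it. A hedged one-liner to confirm that reading (the KUBECONFIG path is taken from the start output later in this report; the subcommand is standard kubectl):

    # An empty context table here matches the "context was not found" error above.
    KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig kubectl config get-contexts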

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-373000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-373000 get pods: exit status 1 (945.156875ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-373000
	* no server found for cluster "functional-373000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-373000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.444458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-373000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-373000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.187752292s)

-- stdout --
	* [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-373000" primary control-plane node in "functional-373000" cluster
	* Restarting existing qemu2 VM for "functional-373000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-373000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-373000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.18835775s for "functional-373000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (68.6355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
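Both restart attempts fail at the same point: the connection to /var/run/socket_vmnet is refused, so QEMU never receives its network file descriptor and provisioning aborts with GUEST_PROVISION. A hedged host-side triage sketch; the socket and client paths come from the cluster config in the log above, while the Homebrew service name is an assumption about how socket_vmnet is managed on this agent:

    # Is the socket present at the path minikube is using?
    ls -l /var/run/socket_vmnet
    # socket_vmnet_client connects to the socket before exec'ing its command,
    # so a refusal here reproduces the error without starting QEMU:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # Assumption: a Homebrew-managed daemon; restart it if the connect is refused.
    sudo brew services restart socket_vmnet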

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-373000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-373000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.365042ms)

** stderr ** 
	error: context "functional-373000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-373000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.386042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 logs: exit status 83 (75.69575ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:13 PDT |                     |
	|         | -p download-only-203000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | -o=json --download-only                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-323000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-323000                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | -o=json --download-only                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-745000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-745000                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-323000                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-745000                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | --download-only -p                                                       | binary-mirror-399000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | binary-mirror-399000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51047                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-399000                                                  | binary-mirror-399000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| addons  | enable dashboard -p                                                      | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | addons-565000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | addons-565000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-565000 --wait=true                                             | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-565000                                                         | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | -p nospam-249000 -n=1 --memory=2250 --wait=false                         | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:15 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-249000                                                         | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | minikube-local-cache-test:functional-373000                              |                      |         |         |                     |                     |
	| cache   | functional-373000 cache delete                                           | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | minikube-local-cache-test:functional-373000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| ssh     | functional-373000 ssh sudo                                               | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-373000                                                        | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-373000 ssh                                                    | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-373000 cache reload                                           | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	| ssh     | functional-373000 ssh                                                    | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-373000 kubectl --                                             | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | --context functional-373000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:15:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:15:26.499142    7320 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:15:26.499252    7320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:26.499254    7320 out.go:304] Setting ErrFile to fd 2...
	I0731 12:15:26.499256    7320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:15:26.499369    7320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:15:26.500428    7320 out.go:298] Setting JSON to false
	I0731 12:15:26.516134    7320 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4489,"bootTime":1722448837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:15:26.516205    7320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:15:26.520815    7320 out.go:177] * [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:15:26.527742    7320 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:15:26.527799    7320 notify.go:220] Checking for updates...
	I0731 12:15:26.535755    7320 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:15:26.539751    7320 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:15:26.542784    7320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:15:26.545796    7320 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:15:26.548766    7320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:15:26.552005    7320 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:15:26.552057    7320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:15:26.558711    7320 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:15:26.567731    7320 start.go:297] selected driver: qemu2
	I0731 12:15:26.567740    7320 start.go:901] validating driver "qemu2" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:26.567810    7320 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:15:26.570166    7320 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:15:26.570186    7320 cni.go:84] Creating CNI manager for ""
	I0731 12:15:26.570196    7320 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:15:26.570238    7320 start.go:340] cluster config:
	{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:15:26.573913    7320 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:15:26.582746    7320 out.go:177] * Starting "functional-373000" primary control-plane node in "functional-373000" cluster
	I0731 12:15:26.588725    7320 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:15:26.588742    7320 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:15:26.588752    7320 cache.go:56] Caching tarball of preloaded images
	I0731 12:15:26.588820    7320 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:15:26.588824    7320 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:15:26.588881    7320 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/functional-373000/config.json ...
	I0731 12:15:26.589723    7320 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:15:26.589759    7320 start.go:364] duration metric: took 30.959µs to acquireMachinesLock for "functional-373000"
	I0731 12:15:26.589766    7320 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:15:26.589771    7320 fix.go:54] fixHost starting: 
	I0731 12:15:26.589891    7320 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
	W0731 12:15:26.589898    7320 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:15:26.595783    7320 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
	I0731 12:15:26.598740    7320 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:15:26.598777    7320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
	I0731 12:15:26.600816    7320 main.go:141] libmachine: STDOUT: 
	I0731 12:15:26.600833    7320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:15:26.600862    7320 fix.go:56] duration metric: took 11.092375ms for fixHost
	I0731 12:15:26.600864    7320 start.go:83] releasing machines lock for "functional-373000", held for 11.102584ms
	W0731 12:15:26.600871    7320 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:15:26.600910    7320 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:26.600914    7320 start.go:729] Will try again in 5 seconds ...
	I0731 12:15:31.603016    7320 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:15:31.603495    7320 start.go:364] duration metric: took 393.209µs to acquireMachinesLock for "functional-373000"
	I0731 12:15:31.603678    7320 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:15:31.603692    7320 fix.go:54] fixHost starting: 
	I0731 12:15:31.604450    7320 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
	W0731 12:15:31.604469    7320 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:15:31.608034    7320 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
	I0731 12:15:31.613806    7320 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:15:31.614041    7320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
	I0731 12:15:31.624084    7320 main.go:141] libmachine: STDOUT: 
	I0731 12:15:31.624143    7320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:15:31.624249    7320 fix.go:56] duration metric: took 20.558667ms for fixHost
	I0731 12:15:31.624262    7320 start.go:83] releasing machines lock for "functional-373000", held for 20.750334ms
	W0731 12:15:31.624450    7320 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:15:31.633844    7320 out.go:177] 
	W0731 12:15:31.637934    7320 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:15:31.637990    7320 out.go:239] * 
	W0731 12:15:31.640540    7320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:15:31.647845    7320 out.go:177] 
	
	
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-373000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:13 PDT |                     |
|         | -p download-only-203000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | -o=json --download-only                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | -p download-only-323000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-323000                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | -o=json --download-only                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | -p download-only-745000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-745000                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-323000                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-745000                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | --download-only -p                                                       | binary-mirror-399000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | binary-mirror-399000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51047                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-399000                                                  | binary-mirror-399000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| addons  | enable dashboard -p                                                      | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | addons-565000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | addons-565000                                                            |                      |         |         |                     |                     |
| start   | -p addons-565000 --wait=true                                             | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-565000                                                         | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | -p nospam-249000 -n=1 --memory=2250 --wait=false                         | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-249000                                                         | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | minikube-local-cache-test:functional-373000                              |                      |         |         |                     |                     |
| cache   | functional-373000 cache delete                                           | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | minikube-local-cache-test:functional-373000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| ssh     | functional-373000 ssh sudo                                               | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-373000                                                        | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-373000 ssh                                                    | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-373000 cache reload                                           | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| ssh     | functional-373000 ssh                                                    | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-373000 kubectl --                                             | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --context functional-373000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/31 12:15:26
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0731 12:15:26.499142    7320 out.go:291] Setting OutFile to fd 1 ...
I0731 12:15:26.499252    7320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:15:26.499254    7320 out.go:304] Setting ErrFile to fd 2...
I0731 12:15:26.499256    7320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:15:26.499369    7320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:15:26.500428    7320 out.go:298] Setting JSON to false
I0731 12:15:26.516134    7320 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4489,"bootTime":1722448837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0731 12:15:26.516205    7320 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0731 12:15:26.520815    7320 out.go:177] * [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0731 12:15:26.527742    7320 out.go:177]   - MINIKUBE_LOCATION=19355
I0731 12:15:26.527799    7320 notify.go:220] Checking for updates...
I0731 12:15:26.535755    7320 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
I0731 12:15:26.539751    7320 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0731 12:15:26.542784    7320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0731 12:15:26.545796    7320 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
I0731 12:15:26.548766    7320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0731 12:15:26.552005    7320 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:15:26.552057    7320 driver.go:392] Setting default libvirt URI to qemu:///system
I0731 12:15:26.558711    7320 out.go:177] * Using the qemu2 driver based on existing profile
I0731 12:15:26.567731    7320 start.go:297] selected driver: qemu2
I0731 12:15:26.567740    7320 start.go:901] validating driver "qemu2" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:15:26.567810    7320 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0731 12:15:26.570166    7320 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0731 12:15:26.570186    7320 cni.go:84] Creating CNI manager for ""
I0731 12:15:26.570196    7320 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0731 12:15:26.570238    7320 start.go:340] cluster config:
{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:15:26.573913    7320 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0731 12:15:26.582746    7320 out.go:177] * Starting "functional-373000" primary control-plane node in "functional-373000" cluster
I0731 12:15:26.588725    7320 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 12:15:26.588742    7320 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 12:15:26.588752    7320 cache.go:56] Caching tarball of preloaded images
I0731 12:15:26.588820    7320 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 12:15:26.588824    7320 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 12:15:26.588881    7320 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/functional-373000/config.json ...
I0731 12:15:26.589723    7320 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:15:26.589759    7320 start.go:364] duration metric: took 30.959µs to acquireMachinesLock for "functional-373000"
I0731 12:15:26.589766    7320 start.go:96] Skipping create...Using existing machine configuration
I0731 12:15:26.589771    7320 fix.go:54] fixHost starting: 
I0731 12:15:26.589891    7320 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
W0731 12:15:26.589898    7320 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:15:26.595783    7320 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
I0731 12:15:26.598740    7320 qemu.go:418] Using hvf for hardware acceleration
I0731 12:15:26.598777    7320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
I0731 12:15:26.600816    7320 main.go:141] libmachine: STDOUT: 
I0731 12:15:26.600833    7320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0731 12:15:26.600862    7320 fix.go:56] duration metric: took 11.092375ms for fixHost
I0731 12:15:26.600864    7320 start.go:83] releasing machines lock for "functional-373000", held for 11.102584ms
W0731 12:15:26.600871    7320 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:15:26.600910    7320 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:15:26.600914    7320 start.go:729] Will try again in 5 seconds ...
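
The attempt above and the one below show minikube's fixed-delay retry: the first fixHost attempt fails on the socket_vmnet socket, the failure is downgraded to a warning, and a single retry follows five seconds later before the error becomes fatal. A minimal Go sketch of that pattern, assuming nothing about minikube's internals beyond what the log shows (startHost here is a hypothetical stand-in, not minikube's actual start.go code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start step that fails in this log.
// It is a hypothetical helper that always returns the error seen above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		// First failure is only a warning, mirroring "! StartHost failed, but will try again".
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		// Fixed delay, matching "Will try again in 5 seconds ...".
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			// Second failure is fatal, mirroring "X Exiting due to GUEST_PROVISION".
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}
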
I0731 12:15:31.603016    7320 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:15:31.603495    7320 start.go:364] duration metric: took 393.209µs to acquireMachinesLock for "functional-373000"
I0731 12:15:31.603678    7320 start.go:96] Skipping create...Using existing machine configuration
I0731 12:15:31.603692    7320 fix.go:54] fixHost starting: 
I0731 12:15:31.604450    7320 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
W0731 12:15:31.604469    7320 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:15:31.608034    7320 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
I0731 12:15:31.613806    7320 qemu.go:418] Using hvf for hardware acceleration
I0731 12:15:31.614041    7320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
I0731 12:15:31.624084    7320 main.go:141] libmachine: STDOUT: 
I0731 12:15:31.624143    7320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0731 12:15:31.624249    7320 fix.go:56] duration metric: took 20.558667ms for fixHost
I0731 12:15:31.624262    7320 start.go:83] releasing machines lock for "functional-373000", held for 20.750334ms
W0731 12:15:31.624450    7320 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:15:31.633844    7320 out.go:177] 
W0731 12:15:31.637934    7320 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:15:31.637990    7320 out.go:239] * 
W0731 12:15:31.640540    7320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:15:31.647845    7320 out.go:177] 
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
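
Both failing tests in this block trace back to one precondition: nothing is listening on /var/run/socket_vmnet, so every qemu2 VM start is refused and the logs never contain the expected content. A short, hypothetical Go probe (not part of the test suite) that reproduces the connection the driver needs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that the qemu2 driver hands to qemu-system-aarch64.
	// If the socket_vmnet daemon is down, this fails with "connection refused",
	// the same error seen throughout the logs above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
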

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4218532212/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:13 PDT |                     |
|         | -p download-only-203000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | -o=json --download-only                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | -p download-only-323000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-323000                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | -o=json --download-only                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | -p download-only-745000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-745000                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-203000                                                  | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-323000                                                  | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| delete  | -p download-only-745000                                                  | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | --download-only -p                                                       | binary-mirror-399000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | binary-mirror-399000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51047                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-399000                                                  | binary-mirror-399000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| addons  | enable dashboard -p                                                      | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | addons-565000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | addons-565000                                                            |                      |         |         |                     |                     |
| start   | -p addons-565000 --wait=true                                             | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-565000                                                         | addons-565000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
| start   | -p nospam-249000 -n=1 --memory=2250 --wait=false                         | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-249000 --log_dir                                                  | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-249000                                                         | nospam-249000        | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-373000 cache add                                              | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | minikube-local-cache-test:functional-373000                              |                      |         |         |                     |                     |
| cache   | functional-373000 cache delete                                           | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | minikube-local-cache-test:functional-373000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| ssh     | functional-373000 ssh sudo                                               | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-373000                                                        | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-373000 ssh                                                    | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-373000 cache reload                                           | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
| ssh     | functional-373000 ssh                                                    | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT | 31 Jul 24 12:15 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-373000 kubectl --                                             | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --context functional-373000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-373000                                                     | functional-373000    | jenkins | v1.33.1 | 31 Jul 24 12:15 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/31 12:15:26
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0731 12:15:26.499142    7320 out.go:291] Setting OutFile to fd 1 ...
I0731 12:15:26.499252    7320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:15:26.499254    7320 out.go:304] Setting ErrFile to fd 2...
I0731 12:15:26.499256    7320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:15:26.499369    7320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:15:26.500428    7320 out.go:298] Setting JSON to false
I0731 12:15:26.516134    7320 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4489,"bootTime":1722448837,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0731 12:15:26.516205    7320 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0731 12:15:26.520815    7320 out.go:177] * [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0731 12:15:26.527742    7320 out.go:177]   - MINIKUBE_LOCATION=19355
I0731 12:15:26.527799    7320 notify.go:220] Checking for updates...
I0731 12:15:26.535755    7320 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
I0731 12:15:26.539751    7320 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0731 12:15:26.542784    7320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0731 12:15:26.545796    7320 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
I0731 12:15:26.548766    7320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0731 12:15:26.552005    7320 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:15:26.552057    7320 driver.go:392] Setting default libvirt URI to qemu:///system
I0731 12:15:26.558711    7320 out.go:177] * Using the qemu2 driver based on existing profile
I0731 12:15:26.567731    7320 start.go:297] selected driver: qemu2
I0731 12:15:26.567740    7320 start.go:901] validating driver "qemu2" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:15:26.567810    7320 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0731 12:15:26.570166    7320 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0731 12:15:26.570186    7320 cni.go:84] Creating CNI manager for ""
I0731 12:15:26.570196    7320 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0731 12:15:26.570238    7320 start.go:340] cluster config:
{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0731 12:15:26.573913    7320 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0731 12:15:26.582746    7320 out.go:177] * Starting "functional-373000" primary control-plane node in "functional-373000" cluster
I0731 12:15:26.588725    7320 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 12:15:26.588742    7320 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 12:15:26.588752    7320 cache.go:56] Caching tarball of preloaded images
I0731 12:15:26.588820    7320 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 12:15:26.588824    7320 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 12:15:26.588881    7320 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/functional-373000/config.json ...
I0731 12:15:26.589723    7320 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:15:26.589759    7320 start.go:364] duration metric: took 30.959µs to acquireMachinesLock for "functional-373000"
I0731 12:15:26.589766    7320 start.go:96] Skipping create...Using existing machine configuration
I0731 12:15:26.589771    7320 fix.go:54] fixHost starting: 
I0731 12:15:26.589891    7320 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
W0731 12:15:26.589898    7320 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:15:26.595783    7320 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
I0731 12:15:26.598740    7320 qemu.go:418] Using hvf for hardware acceleration
I0731 12:15:26.598777    7320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
I0731 12:15:26.600816    7320 main.go:141] libmachine: STDOUT: 
I0731 12:15:26.600833    7320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 12:15:26.600862    7320 fix.go:56] duration metric: took 11.092375ms for fixHost
I0731 12:15:26.600864    7320 start.go:83] releasing machines lock for "functional-373000", held for 11.102584ms
W0731 12:15:26.600871    7320 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:15:26.600910    7320 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:15:26.600914    7320 start.go:729] Will try again in 5 seconds ...
I0731 12:15:31.603016    7320 start.go:360] acquireMachinesLock for functional-373000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 12:15:31.603495    7320 start.go:364] duration metric: took 393.209µs to acquireMachinesLock for "functional-373000"
I0731 12:15:31.603678    7320 start.go:96] Skipping create...Using existing machine configuration
I0731 12:15:31.603692    7320 fix.go:54] fixHost starting: 
I0731 12:15:31.604450    7320 fix.go:112] recreateIfNeeded on functional-373000: state=Stopped err=<nil>
W0731 12:15:31.604469    7320 fix.go:138] unexpected machine state, will restart: <nil>
I0731 12:15:31.608034    7320 out.go:177] * Restarting existing qemu2 VM for "functional-373000" ...
I0731 12:15:31.613806    7320 qemu.go:418] Using hvf for hardware acceleration
I0731 12:15:31.614041    7320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:20:c8:2d:e7:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/functional-373000/disk.qcow2
I0731 12:15:31.624084    7320 main.go:141] libmachine: STDOUT: 
I0731 12:15:31.624143    7320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 12:15:31.624249    7320 fix.go:56] duration metric: took 20.558667ms for fixHost
I0731 12:15:31.624262    7320 start.go:83] releasing machines lock for "functional-373000", held for 20.750334ms
W0731 12:15:31.624450    7320 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-373000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 12:15:31.633844    7320 out.go:177] 
W0731 12:15:31.637934    7320 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 12:15:31.637990    7320 out.go:239] * 
W0731 12:15:31.640540    7320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:15:31.647845    7320 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
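
Every failure that follows shares one root cause, visible in the Last Start log above: the qemu2 driver cannot reach the socket_vmnet unix socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). As a hedged illustration (not part of the test suite), a minimal Go program that reproduces just that failing step, with the socket path copied from the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu;
	// the path is the one reported in the log above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// On this host the dial fails exactly like the driver does,
		// e.g. "connect: connection refused".
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet reachable")
}

Until the socket_vmnet daemon is running and listening at that path, every qemu2 start below fails the same way.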

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-373000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-373000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.374208ms)

** stderr ** 
	error: context "functional-373000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-373000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-373000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-373000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-373000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-373000 --alsologtostderr -v=1] stderr:
I0731 12:16:11.713858    7634 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:11.714261    7634 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:11.714264    7634 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:11.714267    7634 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:11.714414    7634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:16:11.714653    7634 mustload.go:65] Loading cluster: functional-373000
I0731 12:16:11.714850    7634 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:11.718147    7634 out.go:177] * The control-plane node functional-373000 host is not running: state=Stopped
I0731 12:16:11.721911    7634 out.go:177]   To start a cluster, run: "minikube start -p functional-373000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (41.690166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 status: exit status 7 (29.925167ms)

-- stdout --
	functional-373000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-373000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.315167ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-373000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 status -o json: exit status 7 (30.190875ms)

-- stdout --
	{"Name":"functional-373000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-373000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (28.840667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
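
Of the three status forms exercised above, the -o json one is meant for scripting; as a hedged sketch (not minikube code), a small Go program that decodes exactly the object printed on stdout above, with field names taken verbatim from it:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeStatus mirrors the JSON object shown in the -o json stdout above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-373000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Host) // prints "Stopped", matching the report
}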

TestFunctional/parallel/ServiceCmdConnect (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-373000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-373000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.381708ms)

** stderr ** 
	error: context "functional-373000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-373000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-373000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-373000 describe po hello-node-connect: exit status 1 (26.216041ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:1600: "kubectl --context functional-373000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-373000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-373000 logs -l app=hello-node-connect: exit status 1 (25.577583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:1606: "kubectl --context functional-373000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-373000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-373000 describe svc hello-node-connect: exit status 1 (25.644459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:1612: "kubectl --context functional-373000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.678166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.13s)
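
The repeated `context "functional-373000" does not exist` errors above mean the failed start never wrote the profile's context into the kubeconfig. As a hedged sketch (the tests themselves shell out to kubectl; using k8s.io/client-go here is an assumption for illustration), a direct check for the missing context:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from the default locations and look for the
	// context that the kubectl invocations above could not find.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	_, ok := cfg.Contexts["functional-373000"]
	fmt.Println("context exists:", ok) // false on this host, matching the errors
}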

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-373000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.550042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "echo hello": exit status 83 (40.58275ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n"*. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "cat /etc/hostname": exit status 83 (39.972542ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-373000"- but got *"* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n"*. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (30.207667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.11s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (52.722333ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-373000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.812291ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-373000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-373000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cp functional-373000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd477850634/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 cp functional-373000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd477850634/001/cp-test.txt: exit status 83 (47.583208ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-373000 cp functional-373000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd477850634/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.976541ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd477850634/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.714292ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-373000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (38.943917ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-373000 ssh -n functional-373000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-373000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-373000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
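
The "(-want +got)" blocks above are go-cmp-style diffs of the file content the test expected against the advisory message the stopped cluster returned instead. A hedged sketch that reproduces the same report shape with github.com/google/go-cmp, using strings taken from the mismatch above:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-373000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-373000\"\n"
	// cmp.Diff returns "" for equal values; otherwise a -want +got report
	// like the strings.Join({...}) blocks in the log above.
	fmt.Println(cmp.Diff(want, got))
}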

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6986/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/test/nested/copy/6986/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/test/nested/copy/6986/hosts": exit status 83 (45.623667ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/test/nested/copy/6986/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-373000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-373000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (30.112208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6986.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/6986.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/6986.pem": exit status 83 (40.569708ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6986.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo cat /etc/ssl/certs/6986.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6986.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-373000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-373000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6986.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /usr/share/ca-certificates/6986.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /usr/share/ca-certificates/6986.pem": exit status 83 (39.664458ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6986.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo cat /usr/share/ca-certificates/6986.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6986.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-373000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-373000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.791792ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-373000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-373000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/69862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/69862.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/69862.pem": exit status 83 (40.668458ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/69862.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo cat /etc/ssl/certs/69862.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/69862.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-373000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-373000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/69862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /usr/share/ca-certificates/69862.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /usr/share/ca-certificates/69862.pem": exit status 83 (44.694375ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/69862.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo cat /usr/share/ca-certificates/69862.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/69862.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-373000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-373000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (38.701208ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-373000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-373000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.0345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-373000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-373000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.007041ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-373000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-373000 -n functional-373000: exit status 7 (29.7445ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-373000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo systemctl is-active crio": exit status 83 (38.672458ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 version -o=json --components: exit status 83 (40.974916ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-373000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-373000 image ls --format short --alsologtostderr:
I0731 12:16:12.148339    7651 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:12.148488    7651 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.148492    7651 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:12.148494    7651 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.148636    7651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:16:12.149044    7651 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:12.149110    7651 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-373000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-373000 image ls --format table --alsologtostderr:
I0731 12:16:12.217239    7655 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:12.217389    7655 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.217393    7655 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:12.217395    7655 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.217525    7655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:16:12.217940    7655 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:12.218003    7655 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-373000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-373000 image ls --format json --alsologtostderr:
I0731 12:16:12.183203    7653 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:12.183351    7653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.183354    7653 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:12.183367    7653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.183510    7653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:16:12.183938    7653 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:12.183999    7653 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-373000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-373000 image ls --format yaml --alsologtostderr:
I0731 12:16:12.112670    7649 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:12.112812    7649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.112815    7649 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:12.112817    7649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.112960    7649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:16:12.113369    7649 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:12.113426    7649 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh pgrep buildkitd: exit status 83 (41.864125ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image build -t localhost/my-image:functional-373000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-373000 image build -t localhost/my-image:functional-373000 testdata/build --alsologtostderr:
I0731 12:16:12.294223    7659 out.go:291] Setting OutFile to fd 1 ...
I0731 12:16:12.294774    7659 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.294778    7659 out.go:304] Setting ErrFile to fd 2...
I0731 12:16:12.294780    7659 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:16:12.294938    7659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:16:12.295369    7659 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:12.295804    7659 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:16:12.296038    7659 build_images.go:133] succeeded building to: 
I0731 12:16:12.296042    7659 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls
functional_test.go:442: expected "localhost/my-image:functional-373000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-373000 docker-env) && out/minikube-darwin-arm64 status -p functional-373000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-373000 docker-env) && out/minikube-darwin-arm64 status -p functional-373000": exit status 1 (47.77925ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2: exit status 83 (40.832584ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
** stderr ** 
	I0731 12:16:11.985975    7643 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:11.986350    7643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:11.986353    7643 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:11.986356    7643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:11.986521    7643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:16:11.986794    7643 mustload.go:65] Loading cluster: functional-373000
	I0731 12:16:11.986984    7643 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:11.990573    7643 out.go:177] * The control-plane node functional-373000 host is not running: state=Stopped
	I0731 12:16:11.994470    7643 out.go:177]   To start a cluster, run: "minikube start -p functional-373000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2: exit status 83 (42.70725ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
** stderr ** 
	I0731 12:16:12.069528    7647 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:12.069656    7647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:12.069659    7647 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:12.069661    7647 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:12.069776    7647 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:16:12.069997    7647 mustload.go:65] Loading cluster: functional-373000
	I0731 12:16:12.070173    7647 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:12.075476    7647 out.go:177] * The control-plane node functional-373000 host is not running: state=Stopped
	I0731 12:16:12.079401    7647 out.go:177]   To start a cluster, run: "minikube start -p functional-373000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2: exit status 83 (41.588666ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
** stderr ** 
	I0731 12:16:12.028250    7645 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:12.028404    7645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:12.028407    7645 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:12.028409    7645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:12.028554    7645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:16:12.028794    7645 mustload.go:65] Loading cluster: functional-373000
	I0731 12:16:12.028997    7645 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:12.033447    7645 out.go:177] * The control-plane node functional-373000 host is not running: state=Stopped
	I0731 12:16:12.037458    7645 out.go:177]   To start a cluster, run: "minikube start -p functional-373000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-373000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-373000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-373000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (25.813ms)

** stderr ** 
	error: context "functional-373000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-373000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 service list: exit status 83 (43.470792ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-373000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 service list -o json: exit status 83 (42.757209ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-373000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 service --namespace=default --https --url hello-node: exit status 83 (46.747625ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-373000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 service hello-node --url --format={{.IP}}: exit status 83 (40.845ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-373000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 service hello-node --url: exit status 83 (41.899375ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-373000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test.go:1565: failed to parse "* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"": parse "* The control-plane node functional-373000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-373000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0731 12:15:33.516893    7440 out.go:291] Setting OutFile to fd 1 ...
I0731 12:15:33.517043    7440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:15:33.517047    7440 out.go:304] Setting ErrFile to fd 2...
I0731 12:15:33.517049    7440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:15:33.517180    7440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:15:33.517411    7440 mustload.go:65] Loading cluster: functional-373000
I0731 12:15:33.517622    7440 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:15:33.521627    7440 out.go:177] * The control-plane node functional-373000 host is not running: state=Stopped
I0731 12:15:33.532556    7440 out.go:177]   To start a cluster, run: "minikube start -p functional-373000"

stdout: * The control-plane node functional-373000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-373000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7441: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-373000": client config: context "functional-373000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-373000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-373000 get svc nginx-svc: exit status 1 (72.508459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-373000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-373000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.48s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image load --daemon docker.io/kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-373000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image load --daemon docker.io/kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-373000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-373000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image load --daemon docker.io/kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-373000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image save docker.io/kicbase/echo-server:functional-373000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-373000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035493375s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 17 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (25.72s)

TestMultiControlPlane/serial/StartCluster (10.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-005000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-005000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.947377083s)

-- stdout --
	* [ha-005000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-005000" primary control-plane node in "ha-005000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-005000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:18:12.250592    7697 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:18:12.250719    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:18:12.250722    7697 out.go:304] Setting ErrFile to fd 2...
	I0731 12:18:12.250724    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:18:12.250848    7697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:18:12.252038    7697 out.go:298] Setting JSON to false
	I0731 12:18:12.268001    7697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4655,"bootTime":1722448837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:18:12.268096    7697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:18:12.275108    7697 out.go:177] * [ha-005000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:18:12.282042    7697 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:18:12.282095    7697 notify.go:220] Checking for updates...
	I0731 12:18:12.288968    7697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:18:12.292010    7697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:18:12.295020    7697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:18:12.297950    7697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:18:12.300996    7697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:18:12.304186    7697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:18:12.307970    7697 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:18:12.315016    7697 start.go:297] selected driver: qemu2
	I0731 12:18:12.315023    7697 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:18:12.315029    7697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:18:12.317185    7697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:18:12.320008    7697 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:18:12.323110    7697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:18:12.323127    7697 cni.go:84] Creating CNI manager for ""
	I0731 12:18:12.323138    7697 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 12:18:12.323142    7697 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:18:12.323176    7697 start.go:340] cluster config:
	{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:18:12.326826    7697 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:18:12.332993    7697 out.go:177] * Starting "ha-005000" primary control-plane node in "ha-005000" cluster
	I0731 12:18:12.336986    7697 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:18:12.337005    7697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:18:12.337012    7697 cache.go:56] Caching tarball of preloaded images
	I0731 12:18:12.337069    7697 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:18:12.337074    7697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:18:12.337271    7697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/ha-005000/config.json ...
	I0731 12:18:12.337286    7697 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/ha-005000/config.json: {Name:mk9d69767471bcbeea5f54026b7e40ed0acf1ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:18:12.337570    7697 start.go:360] acquireMachinesLock for ha-005000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:18:12.337603    7697 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "ha-005000"
	I0731 12:18:12.337613    7697 start.go:93] Provisioning new machine with config: &{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:18:12.337641    7697 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:18:12.345974    7697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:18:12.363547    7697 start.go:159] libmachine.API.Create for "ha-005000" (driver="qemu2")
	I0731 12:18:12.363579    7697 client.go:168] LocalClient.Create starting
	I0731 12:18:12.363645    7697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:18:12.363678    7697 main.go:141] libmachine: Decoding PEM data...
	I0731 12:18:12.363691    7697 main.go:141] libmachine: Parsing certificate...
	I0731 12:18:12.363727    7697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:18:12.363751    7697 main.go:141] libmachine: Decoding PEM data...
	I0731 12:18:12.363760    7697 main.go:141] libmachine: Parsing certificate...
	I0731 12:18:12.364242    7697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:18:12.519594    7697 main.go:141] libmachine: Creating SSH key...
	I0731 12:18:12.658469    7697 main.go:141] libmachine: Creating Disk image...
	I0731 12:18:12.658475    7697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:18:12.658682    7697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:18:12.668186    7697 main.go:141] libmachine: STDOUT: 
	I0731 12:18:12.668208    7697 main.go:141] libmachine: STDERR: 
	I0731 12:18:12.668248    7697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2 +20000M
	I0731 12:18:12.675973    7697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:18:12.675988    7697 main.go:141] libmachine: STDERR: 
	I0731 12:18:12.675999    7697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:18:12.676003    7697 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:18:12.676021    7697 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:18:12.676046    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:87:96:88:c6:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:18:12.677614    7697 main.go:141] libmachine: STDOUT: 
	I0731 12:18:12.677630    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:18:12.677647    7697 client.go:171] duration metric: took 314.074167ms to LocalClient.Create
	I0731 12:18:14.679763    7697 start.go:128] duration metric: took 2.342179125s to createHost
	I0731 12:18:14.679872    7697 start.go:83] releasing machines lock for "ha-005000", held for 2.342300583s
	W0731 12:18:14.679946    7697 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:18:14.692240    7697 out.go:177] * Deleting "ha-005000" in qemu2 ...
	W0731 12:18:14.721762    7697 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:18:14.721784    7697 start.go:729] Will try again in 5 seconds ...
	I0731 12:18:19.723815    7697 start.go:360] acquireMachinesLock for ha-005000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:18:19.724260    7697 start.go:364] duration metric: took 374.375µs to acquireMachinesLock for "ha-005000"
	I0731 12:18:19.724377    7697 start.go:93] Provisioning new machine with config: &{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:18:19.724684    7697 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:18:19.740425    7697 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:18:19.790047    7697 start.go:159] libmachine.API.Create for "ha-005000" (driver="qemu2")
	I0731 12:18:19.790097    7697 client.go:168] LocalClient.Create starting
	I0731 12:18:19.790219    7697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:18:19.790298    7697 main.go:141] libmachine: Decoding PEM data...
	I0731 12:18:19.790314    7697 main.go:141] libmachine: Parsing certificate...
	I0731 12:18:19.790376    7697 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:18:19.790421    7697 main.go:141] libmachine: Decoding PEM data...
	I0731 12:18:19.790442    7697 main.go:141] libmachine: Parsing certificate...
	I0731 12:18:19.791130    7697 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:18:19.955338    7697 main.go:141] libmachine: Creating SSH key...
	I0731 12:18:20.102590    7697 main.go:141] libmachine: Creating Disk image...
	I0731 12:18:20.102600    7697 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:18:20.103113    7697 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:18:20.112791    7697 main.go:141] libmachine: STDOUT: 
	I0731 12:18:20.112812    7697 main.go:141] libmachine: STDERR: 
	I0731 12:18:20.112868    7697 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2 +20000M
	I0731 12:18:20.120710    7697 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:18:20.120723    7697 main.go:141] libmachine: STDERR: 
	I0731 12:18:20.120736    7697 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:18:20.120829    7697 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:18:20.120839    7697 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:18:20.120880    7697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:2f:03:fb:78:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:18:20.122484    7697 main.go:141] libmachine: STDOUT: 
	I0731 12:18:20.122498    7697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:18:20.122514    7697 client.go:171] duration metric: took 332.420791ms to LocalClient.Create
	I0731 12:18:22.124665    7697 start.go:128] duration metric: took 2.400032792s to createHost
	I0731 12:18:22.124728    7697 start.go:83] releasing machines lock for "ha-005000", held for 2.400525834s
	W0731 12:18:22.125579    7697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-005000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-005000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:18:22.136197    7697 out.go:177] 
	W0731 12:18:22.145389    7697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:18:22.145430    7697 out.go:239] * 
	* 
	W0731 12:18:22.148408    7697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:18:22.155275    7697 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-005000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (66.710542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.02s)
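
Diagnostic note: every qemu2 VM creation in this run fails with the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon on the host rather than at minikube itself. A hedged host-side check (the Homebrew service name is an assumption; this agent uses an /opt/socket_vmnet install, which may instead be managed directly through launchd):

    # is the daemon alive, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # for a Homebrew-managed install, restart the service as root
    sudo brew services restart socket_vmnet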

TestMultiControlPlane/serial/DeployApp (79.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.63475ms)

** stderr ** 
	error: cluster "ha-005000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- rollout status deployment/busybox: exit status 1 (55.91975ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.436334ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.355291ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.045459ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.789542ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.989542ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.750125ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.400709ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.491ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.615417ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.041291ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.806584ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.842333ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.151792ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.030292ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.633166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (79.26s)
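
Diagnostic note: each kubectl call above fails with `no server found for cluster "ha-005000"` because StartCluster never brought a VM up, so the kubeconfig entry for the profile has no usable server. A quick confirmation from the same shell:

    # list the contexts and cluster entries the test kubeconfig actually holds
    kubectl config get-contexts
    kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'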

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-005000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.436375ms)

** stderr ** 
	error: no server found for cluster "ha-005000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.79075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-005000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-005000 -v=7 --alsologtostderr: exit status 83 (41.380708ms)

-- stdout --
	* The control-plane node ha-005000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-005000"

-- /stdout --
** stderr ** 
	I0731 12:19:41.611943    7782 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:41.612495    7782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.612499    7782 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:41.612501    7782 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.612638    7782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:41.612919    7782 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:41.613109    7782 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:41.617654    7782 out.go:177] * The control-plane node ha-005000 host is not running: state=Stopped
	I0731 12:19:41.621390    7782 out.go:177]   To start a cluster, run: "minikube start -p ha-005000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-005000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.716209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)
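
Diagnostic note: exit status 83 is minikube's "host is not running" advice path, and the output already names the recovery. Under the socket_vmnet failure above it would still not succeed, but the intended sequence is:

    out/minikube-darwin-arm64 start -p ha-005000
    out/minikube-darwin-arm64 node add -p ha-005000 -v=7 --alsologtostderr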

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-005000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-005000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.4175ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-005000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-005000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-005000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (30.166625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-005000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-005000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.926334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
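
Diagnostic note: the assertions decode `profile list --output json` and expect four nodes and "HAppy" status, while the profile on disk has one stopped control-plane node. A jq one-liner over the same output makes the mismatch visible (jq is an assumption here, not part of the test harness):

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-005000") | {Status, nodes: (.Config.Nodes | length)}'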

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status --output json -v=7 --alsologtostderr: exit status 7 (29.893833ms)

-- stdout --
	{"Name":"ha-005000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0731 12:19:41.816380    7794 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:41.816553    7794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.816557    7794 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:41.816559    7794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.816673    7794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:41.816800    7794 out.go:298] Setting JSON to true
	I0731 12:19:41.816819    7794 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:41.816855    7794 notify.go:220] Checking for updates...
	I0731 12:19:41.817023    7794 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:41.817030    7794 status.go:255] checking status of ha-005000 ...
	I0731 12:19:41.817245    7794 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:41.817248    7794 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:41.817251    7794 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-005000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.915458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
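
Diagnostic note: the decode error is a shape mismatch, not corrupt output. With a single node, `status --output json` emits one JSON object, while the test unmarshals into a []cmd.Status slice and therefore expects an array. Piping the same command through jq (again an assumption, not part of the harness) shows the type directly:

    out/minikube-darwin-arm64 -p ha-005000 status --output json | jq type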

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.164834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 12:19:41.876890    7798 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:41.877297    7798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.877301    7798 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:41.877304    7798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.877459    7798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:41.877694    7798 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:41.877873    7798 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:41.882135    7798 out.go:177] 
	W0731 12:19:41.885097    7798 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0731 12:19:41.885102    7798 out.go:239] * 
	* 
	W0731 12:19:41.886995    7798 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:19:41.891082    7798 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-005000 node stop m02 -v=7 --alsologtostderr": exit status 85
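
Diagnostic note: "Could not find node m02" follows from the failed StartCluster; the profile only ever recorded the primary node, so there is no m02 to stop. Listing the nodes minikube has on record confirms it:

    out/minikube-darwin-arm64 node list -p ha-005000
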
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (29.620667ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:41.923977    7800 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:41.924112    7800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.924115    7800 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:41.924118    7800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:41.924241    7800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:41.924356    7800 out.go:298] Setting JSON to false
	I0731 12:19:41.924365    7800 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:41.924432    7800 notify.go:220] Checking for updates...
	I0731 12:19:41.924550    7800 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:41.924559    7800 status.go:255] checking status of ha-005000 ...
	I0731 12:19:41.924800    7800 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:41.924804    7800 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:41.924806    7800 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.477167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
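
Note: every node-scoped subtest in this group exits with status 85 carrying the GUEST_NODE_RETRIEVE reason. The profile only ever contains the primary machine, because the m02/m03 control planes were never created when StartCluster failed at the top of this run, so any "node stop m02" must fail. A minimal check against a live run (a sketch, assuming the same binary path and profile name as above):

	out/minikube-darwin-arm64 node list -p ha-005000
	out/minikube-darwin-arm64 profile list --output json

On a healthy HA run the node list would show three control-plane machines plus one worker (the 4 nodes the later assertions expect); here it can only return ha-005000 itself.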

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-005000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (30.115416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.07s)
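
Note: the "Degraded" assertion is evaluated from the profile JSON quoted above, and with the host Stopped and a single anonymous entry in Config.Nodes the status can never reach Degraded. To pull just the relevant fields out of that blob (a sketch, assuming jq is installed on the host):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-005000") | {Status, nodes: (.Config.Nodes | length)}'

For this run that prints "Stopped" with a node count of 1, matching the failure message.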

TestMultiControlPlane/serial/RestartSecondaryNode (44.17s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.414792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 12:19:42.057533    7809 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:42.058280    7809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:42.058283    7809 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:42.058286    7809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:42.058425    7809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:42.058642    7809 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:42.058834    7809 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:42.063068    7809 out.go:177] 
	W0731 12:19:42.067079    7809 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0731 12:19:42.067088    7809 out.go:239] * 
	* 
	W0731 12:19:42.068970    7809 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:19:42.073030    7809 out.go:177] 

** /stderr **
ha_test.go:422: I0731 12:19:42.057533    7809 out.go:291] Setting OutFile to fd 1 ...
I0731 12:19:42.058280    7809 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:19:42.058283    7809 out.go:304] Setting ErrFile to fd 2...
I0731 12:19:42.058286    7809 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:19:42.058425    7809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:19:42.058642    7809 mustload.go:65] Loading cluster: ha-005000
I0731 12:19:42.058834    7809 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:19:42.063068    7809 out.go:177] 
W0731 12:19:42.067079    7809 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0731 12:19:42.067088    7809 out.go:239] * 
* 
W0731 12:19:42.068970    7809 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:19:42.073030    7809 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-005000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (30.176625ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:42.106603    7811 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:42.106755    7811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:42.106758    7811 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:42.106760    7811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:42.106895    7811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:42.107021    7811 out.go:298] Setting JSON to false
	I0731 12:19:42.107030    7811 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:42.107088    7811 notify.go:220] Checking for updates...
	I0731 12:19:42.107223    7811 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:42.107229    7811 status.go:255] checking status of ha-005000 ...
	I0731 12:19:42.107442    7811 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:42.107446    7811 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:42.107449    7811 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (73.434042ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:42.852237    7815 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:42.852692    7815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:42.852698    7815 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:42.852702    7815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:42.852983    7815 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:42.853180    7815 out.go:298] Setting JSON to false
	I0731 12:19:42.853193    7815 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:42.853256    7815 notify.go:220] Checking for updates...
	I0731 12:19:42.853713    7815 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:42.853725    7815 status.go:255] checking status of ha-005000 ...
	I0731 12:19:42.853982    7815 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:42.853988    7815 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:42.853991    7815 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (73.399209ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:44.392476    7820 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:44.392678    7820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:44.392682    7820 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:44.392685    7820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:44.392865    7820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:44.393030    7820 out.go:298] Setting JSON to false
	I0731 12:19:44.393040    7820 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:44.393071    7820 notify.go:220] Checking for updates...
	I0731 12:19:44.393291    7820 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:44.393300    7820 status.go:255] checking status of ha-005000 ...
	I0731 12:19:44.393616    7820 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:44.393620    7820 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:44.393623    7820 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (73.461917ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:46.305356    7822 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:46.305567    7822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:46.305571    7822 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:46.305574    7822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:46.305758    7822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:46.305916    7822 out.go:298] Setting JSON to false
	I0731 12:19:46.305928    7822 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:46.305966    7822 notify.go:220] Checking for updates...
	I0731 12:19:46.306184    7822 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:46.306193    7822 status.go:255] checking status of ha-005000 ...
	I0731 12:19:46.306459    7822 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:46.306465    7822 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:46.306467    7822 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (71.795084ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:51.343311    7827 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:51.343509    7827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:51.343516    7827 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:51.343519    7827 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:51.343688    7827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:51.343856    7827 out.go:298] Setting JSON to false
	I0731 12:19:51.343867    7827 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:51.343912    7827 notify.go:220] Checking for updates...
	I0731 12:19:51.344132    7827 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:51.344140    7827 status.go:255] checking status of ha-005000 ...
	I0731 12:19:51.344403    7827 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:51.344408    7827 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:51.344411    7827 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (71.759416ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:19:54.966146    7829 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:19:54.966365    7829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:54.966370    7829 out.go:304] Setting ErrFile to fd 2...
	I0731 12:19:54.966374    7829 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:19:54.966555    7829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:19:54.966714    7829 out.go:298] Setting JSON to false
	I0731 12:19:54.966725    7829 mustload.go:65] Loading cluster: ha-005000
	I0731 12:19:54.966769    7829 notify.go:220] Checking for updates...
	I0731 12:19:54.966976    7829 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:19:54.966985    7829 status.go:255] checking status of ha-005000 ...
	I0731 12:19:54.967253    7829 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:19:54.967258    7829 status.go:343] host is not running, skipping remaining checks
	I0731 12:19:54.967261    7829 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (73.269583ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:20:00.201528    7831 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:00.201733    7831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:00.201738    7831 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:00.201741    7831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:00.201914    7831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:00.202067    7831 out.go:298] Setting JSON to false
	I0731 12:20:00.202078    7831 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:00.202127    7831 notify.go:220] Checking for updates...
	I0731 12:20:00.202379    7831 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:00.202387    7831 status.go:255] checking status of ha-005000 ...
	I0731 12:20:00.202661    7831 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:20:00.202666    7831 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:00.202669    7831 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (72.065709ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:20:06.958554    7833 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:06.958791    7833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:06.958796    7833 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:06.958799    7833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:06.958985    7833 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:06.959148    7833 out.go:298] Setting JSON to false
	I0731 12:20:06.959161    7833 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:06.959208    7833 notify.go:220] Checking for updates...
	I0731 12:20:06.959430    7833 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:06.959439    7833 status.go:255] checking status of ha-005000 ...
	I0731 12:20:06.959741    7833 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:20:06.959747    7833 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:06.959750    7833 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (73.466292ms)

-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:20:26.162343    7836 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:26.162549    7836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:26.162554    7836 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:26.162557    7836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:26.162735    7836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:26.162890    7836 out.go:298] Setting JSON to false
	I0731 12:20:26.162901    7836 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:26.162936    7836 notify.go:220] Checking for updates...
	I0731 12:20:26.163167    7836 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:26.163176    7836 status.go:255] checking status of ha-005000 ...
	I0731 12:20:26.163466    7836 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:20:26.163470    7836 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:26.163474    7836 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (32.545916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (44.17s)
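
Note: the 44s wall time here is almost entirely the harness's retry loop; ha_test.go:428 re-runs "minikube status" at growing intervals (the timestamps above run from 12:19:42 to 12:20:26) before giving up on a host that stays Stopped throughout. In shell terms the loop is roughly the following sketch (the delays are illustrative, not the test's exact backoff):

	for delay in 1 2 2 4 4 5 7 19; do
	  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr && break
	  sleep "$delay"
	done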

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-005000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-005000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (30.061292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.07s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.87s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-005000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-005000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-005000 -v=7 --alsologtostderr: (3.510851833s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-005000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-005000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.223888417s)

-- stdout --
	* [ha-005000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-005000" primary control-plane node in "ha-005000" cluster
	* Restarting existing qemu2 VM for "ha-005000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-005000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:20:29.874811    7865 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:29.874985    7865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:29.874990    7865 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:29.874993    7865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:29.875173    7865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:29.876361    7865 out.go:298] Setting JSON to false
	I0731 12:20:29.895665    7865 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4792,"bootTime":1722448837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:20:29.895733    7865 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:20:29.900437    7865 out.go:177] * [ha-005000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:20:29.906331    7865 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:20:29.906394    7865 notify.go:220] Checking for updates...
	I0731 12:20:29.913295    7865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:20:29.916318    7865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:20:29.919338    7865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:20:29.922239    7865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:20:29.925285    7865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:20:29.928576    7865 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:29.928636    7865 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:20:29.932236    7865 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:20:29.939294    7865 start.go:297] selected driver: qemu2
	I0731 12:20:29.939300    7865 start.go:901] validating driver "qemu2" against &{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:20:29.939345    7865 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:20:29.941599    7865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:20:29.941647    7865 cni.go:84] Creating CNI manager for ""
	I0731 12:20:29.941653    7865 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:20:29.941720    7865 start.go:340] cluster config:
	{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:20:29.945583    7865 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:20:29.956672    7865 out.go:177] * Starting "ha-005000" primary control-plane node in "ha-005000" cluster
	I0731 12:20:29.960364    7865 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:20:29.960385    7865 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:20:29.960403    7865 cache.go:56] Caching tarball of preloaded images
	I0731 12:20:29.960490    7865 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:20:29.960496    7865 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:20:29.960553    7865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/ha-005000/config.json ...
	I0731 12:20:29.960980    7865 start.go:360] acquireMachinesLock for ha-005000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:20:29.961019    7865 start.go:364] duration metric: took 31.708µs to acquireMachinesLock for "ha-005000"
	I0731 12:20:29.961028    7865 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:20:29.961034    7865 fix.go:54] fixHost starting: 
	I0731 12:20:29.961164    7865 fix.go:112] recreateIfNeeded on ha-005000: state=Stopped err=<nil>
	W0731 12:20:29.961173    7865 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:20:29.965288    7865 out.go:177] * Restarting existing qemu2 VM for "ha-005000" ...
	I0731 12:20:29.973238    7865 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:20:29.973271    7865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:2f:03:fb:78:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:20:29.975466    7865 main.go:141] libmachine: STDOUT: 
	I0731 12:20:29.975488    7865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:20:29.975519    7865 fix.go:56] duration metric: took 14.487167ms for fixHost
	I0731 12:20:29.975525    7865 start.go:83] releasing machines lock for "ha-005000", held for 14.501542ms
	W0731 12:20:29.975533    7865 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:20:29.975567    7865 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:20:29.975573    7865 start.go:729] Will try again in 5 seconds ...
	I0731 12:20:34.977634    7865 start.go:360] acquireMachinesLock for ha-005000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:20:34.978057    7865 start.go:364] duration metric: took 324.042µs to acquireMachinesLock for "ha-005000"
	I0731 12:20:34.978177    7865 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:20:34.978199    7865 fix.go:54] fixHost starting: 
	I0731 12:20:34.978858    7865 fix.go:112] recreateIfNeeded on ha-005000: state=Stopped err=<nil>
	W0731 12:20:34.978887    7865 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:20:34.987356    7865 out.go:177] * Restarting existing qemu2 VM for "ha-005000" ...
	I0731 12:20:34.991322    7865 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:20:34.991553    7865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:2f:03:fb:78:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:20:35.000373    7865 main.go:141] libmachine: STDOUT: 
	I0731 12:20:35.000426    7865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:20:35.000495    7865 fix.go:56] duration metric: took 22.298833ms for fixHost
	I0731 12:20:35.000511    7865 start.go:83] releasing machines lock for "ha-005000", held for 22.434ms
	W0731 12:20:35.000637    7865 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-005000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-005000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:20:35.008418    7865 out.go:177] 
	W0731 12:20:35.012379    7865 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:20:35.012403    7865 out.go:239] * 
	* 
	W0731 12:20:35.014699    7865 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:20:35.022386    7865 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-005000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-005000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (32.567958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.87s)
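
The failures in this section all trace to one root cause: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet is refused. A minimal diagnostic sketch for the CI host (binary and socket paths are taken from the log above; the daemon flags are assumptions about a typical socket_vmnet setup, not something this report confirms):

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Probe the socket the same way libmachine does (expect the same
	# "Failed to connect ... Connection refused" if the daemon is down):
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Restart the daemon if needed (gateway flag illustrative; match the local install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet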

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.823125ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-005000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-005000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:20:35.165424    7877 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:35.165836    7877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:35.165840    7877 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:35.165842    7877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:35.166001    7877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:35.166212    7877 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:35.166384    7877 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:35.170728    7877 out.go:177] * The control-plane node ha-005000 host is not running: state=Stopped
	I0731 12:20:35.173627    7877 out.go:177]   To start a cluster, run: "minikube start -p ha-005000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-005000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (30.235458ms)

                                                
                                                
-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:20:35.206063    7879 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:35.206201    7879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:35.206204    7879 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:35.206206    7879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:35.206366    7879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:35.206489    7879 out.go:298] Setting JSON to false
	I0731 12:20:35.206498    7879 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:35.206564    7879 notify.go:220] Checking for updates...
	I0731 12:20:35.206697    7879 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:35.206710    7879 status.go:255] checking status of ha-005000 ...
	I0731 12:20:35.206918    7879 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:20:35.206922    7879 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:35.206924    7879 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.108041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-005000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (30.058917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
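
The assertion above compares only the Status field buried in the profile JSON dump. When reproducing locally, that field can be pulled out directly; the jq pipeline here is illustrative tooling, not part of the test suite:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | "\(.Name): \(.Status)"'
	# Test expectation: "ha-005000: Degraded"; actual in this run: "ha-005000: Stopped"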

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-005000 stop -v=7 --alsologtostderr: (3.844789292s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr: exit status 7 (66.807083ms)

                                                
                                                
-- stdout --
	ha-005000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:20:39.221467    7911 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:39.221692    7911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:39.221696    7911 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:39.221699    7911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:39.221871    7911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:39.222034    7911 out.go:298] Setting JSON to false
	I0731 12:20:39.222045    7911 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:39.222082    7911 notify.go:220] Checking for updates...
	I0731 12:20:39.222327    7911 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:39.222336    7911 status.go:255] checking status of ha-005000 ...
	I0731 12:20:39.222626    7911 status.go:330] ha-005000 host status = "Stopped" (err=<nil>)
	I0731 12:20:39.222631    7911 status.go:343] host is not running, skipping remaining checks
	I0731 12:20:39.222634    7911 status.go:257] ha-005000 status: &{Name:ha-005000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-005000 status -v=7 --alsologtostderr": ha-005000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (32.18ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-005000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-005000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.177548583s)

                                                
                                                
-- stdout --
	* [ha-005000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-005000" primary control-plane node in "ha-005000" cluster
	* Restarting existing qemu2 VM for "ha-005000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-005000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:20:39.284056    7915 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:39.284181    7915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:39.284184    7915 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:39.284187    7915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:39.284326    7915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:39.285362    7915 out.go:298] Setting JSON to false
	I0731 12:20:39.301256    7915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4802,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:20:39.301320    7915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:20:39.306634    7915 out.go:177] * [ha-005000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:20:39.313522    7915 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:20:39.313582    7915 notify.go:220] Checking for updates...
	I0731 12:20:39.320525    7915 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:20:39.323489    7915 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:20:39.326515    7915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:20:39.329512    7915 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:20:39.330851    7915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:20:39.333754    7915 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:39.334032    7915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:20:39.338510    7915 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:20:39.339768    7915 start.go:297] selected driver: qemu2
	I0731 12:20:39.339774    7915 start.go:901] validating driver "qemu2" against &{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:20:39.339822    7915 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:20:39.342106    7915 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:20:39.342154    7915 cni.go:84] Creating CNI manager for ""
	I0731 12:20:39.342158    7915 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:20:39.342204    7915 start.go:340] cluster config:
	{Name:ha-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-005000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:20:39.345697    7915 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:20:39.353541    7915 out.go:177] * Starting "ha-005000" primary control-plane node in "ha-005000" cluster
	I0731 12:20:39.357468    7915 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:20:39.357481    7915 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:20:39.357491    7915 cache.go:56] Caching tarball of preloaded images
	I0731 12:20:39.357549    7915 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:20:39.357554    7915 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:20:39.357601    7915 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/ha-005000/config.json ...
	I0731 12:20:39.357993    7915 start.go:360] acquireMachinesLock for ha-005000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:20:39.358026    7915 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "ha-005000"
	I0731 12:20:39.358033    7915 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:20:39.358039    7915 fix.go:54] fixHost starting: 
	I0731 12:20:39.358154    7915 fix.go:112] recreateIfNeeded on ha-005000: state=Stopped err=<nil>
	W0731 12:20:39.358162    7915 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:20:39.366485    7915 out.go:177] * Restarting existing qemu2 VM for "ha-005000" ...
	I0731 12:20:39.370489    7915 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:20:39.370523    7915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:2f:03:fb:78:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:20:39.372382    7915 main.go:141] libmachine: STDOUT: 
	I0731 12:20:39.372400    7915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:20:39.372423    7915 fix.go:56] duration metric: took 14.386875ms for fixHost
	I0731 12:20:39.372426    7915 start.go:83] releasing machines lock for "ha-005000", held for 14.396958ms
	W0731 12:20:39.372433    7915 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:20:39.372470    7915 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:20:39.372474    7915 start.go:729] Will try again in 5 seconds ...
	I0731 12:20:44.374475    7915 start.go:360] acquireMachinesLock for ha-005000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:20:44.374961    7915 start.go:364] duration metric: took 378.792µs to acquireMachinesLock for "ha-005000"
	I0731 12:20:44.375147    7915 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:20:44.375168    7915 fix.go:54] fixHost starting: 
	I0731 12:20:44.375886    7915 fix.go:112] recreateIfNeeded on ha-005000: state=Stopped err=<nil>
	W0731 12:20:44.375917    7915 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:20:44.380409    7915 out.go:177] * Restarting existing qemu2 VM for "ha-005000" ...
	I0731 12:20:44.388430    7915 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:20:44.388660    7915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:2f:03:fb:78:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/ha-005000/disk.qcow2
	I0731 12:20:44.397547    7915 main.go:141] libmachine: STDOUT: 
	I0731 12:20:44.397597    7915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:20:44.397689    7915 fix.go:56] duration metric: took 22.524833ms for fixHost
	I0731 12:20:44.397707    7915 start.go:83] releasing machines lock for "ha-005000", held for 22.668958ms
	W0731 12:20:44.397864    7915 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-005000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-005000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:20:44.405303    7915 out.go:177] 
	W0731 12:20:44.409347    7915 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:20:44.409365    7915 out.go:239] * 
	* 
	W0731 12:20:44.411794    7915 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:20:44.420285    7915 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-005000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (68.468458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
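
The start output above prints minikube's standard recovery suggestion. Spelled out as the commands one would run by hand (profile name from the log; --wait/--driver flags mirror the failing invocation):

	out/minikube-darwin-arm64 delete -p ha-005000
	out/minikube-darwin-arm64 start -p ha-005000 --wait=true --driver=qemu2
	# With socket_vmnet still unreachable, the fresh start fails the same way,
	# as TestImageBuild/serial/Setup below demonstrates for a brand-new profile.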

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-005000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.722875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-005000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-005000 --control-plane -v=7 --alsologtostderr: exit status 83 (39.373583ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-005000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-005000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:20:44.608047    7933 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:20:44.608294    7933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:44.608297    7933 out.go:304] Setting ErrFile to fd 2...
	I0731 12:20:44.608299    7933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:20:44.608421    7933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:20:44.608668    7933 mustload.go:65] Loading cluster: ha-005000
	I0731 12:20:44.608856    7933 config.go:182] Loaded profile config "ha-005000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:20:44.613528    7933 out.go:177] * The control-plane node ha-005000 host is not running: state=Stopped
	I0731 12:20:44.617547    7933 out.go:177]   To start a cluster, run: "minikube start -p ha-005000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-005000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.562083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-005000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-005000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-005000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-005000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-005000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-005000 -n ha-005000: exit status 7 (29.405959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-005000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.07s)
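
The node-count assertion reads the Config.Nodes array out of the same profile JSON; counting the entries directly (again with illustrative jq) makes the 4-vs-1 mismatch obvious:

	out/minikube-darwin-arm64 profile list --output json | jq '.valid[0].Config.Nodes | length'
	# Prints 1 in this run: only the primary control-plane node exists, since every
	# earlier node-add step failed against the stopped host.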

                                                
                                    
TestImageBuild/serial/Setup (9.91s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-454000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-454000 --driver=qemu2 : exit status 80 (9.845317125s)

                                                
                                                
-- stdout --
	* [image-454000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-454000" primary control-plane node in "image-454000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-454000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-454000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-454000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-454000 -n image-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-454000 -n image-454000: exit status 7 (67.632333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)

                                                
                                    
TestJSONOutput/start/Command (9.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-827000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-827000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.812359625s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e9250682-aa7f-4c2c-9592-643f854b44de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-827000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b12e0d6e-fb1a-4f58-95e1-6b917f2d8c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"ce8c400e-267b-47b3-9cbd-775c0403a215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig"}}
	{"specversion":"1.0","id":"4f648e62-8ff5-4c07-af3b-4376eef35c65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"abdb080f-909a-44c7-b954-e18cd3ee9310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9967aab5-5786-4c89-8471-cf5d99ddddc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube"}}
	{"specversion":"1.0","id":"29369e39-14d4-40c2-8131-94de8385ba1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"027b7486-b9f9-4302-8e1a-175f7eab3e8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"90fd2666-610b-4d62-8133-c52fe8e3f40d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4180e7d0-159d-44bd-90bb-522ae60f350a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-827000\" primary control-plane node in \"json-output-827000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5366c4c7-60f3-4c63-ae0d-889baaecb402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"103c4f00-0168-4e1d-82e6-ed7d05e128f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-827000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c82c49b-ca05-403c-aa3d-473228b63c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6304a857-3c8e-434e-b9a5-59fb27b36ae7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"767157d2-5454-400e-815e-c3dfbb791d32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-827000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"c83d79f2-56de-4a12-8c04-c4eaaadb0d72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bcde52d1-047b-4e99-8751-8273720aaac1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-827000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.81s)
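The "converting to cloud events" error above is a knock-on effect: the harness decodes stdout line by line as JSON, and the plain-text "OUTPUT:"/"ERROR:" lines that socket_vmnet_client leaks into the stream are not JSON, so decoding stops at the first such line. A rough Go sketch of that failure mode follows; the event struct is trimmed to two fields for illustration (the real test defines its own types):

	// cloudevents_lines.go - reproduces the parse failure class seen above:
	// a non-JSON line in a JSON-lines stream aborts decoding with
	// "invalid character 'O' looking for beginning of value".
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		SpecVersion string `json:"specversion"`
		Type        string `json:"type"`
	}

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `, // leaked socket_vmnet_client text, not JSON
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`,
		}
		for _, line := range lines {
			var ev event
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				fmt.Println("converting to cloud events:", err)
				return
			}
			fmt.Println("parsed event:", ev.Type)
		}
	}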

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-827000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-827000 --output=json --user=testUser: exit status 83 (78.640167ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6192c744-1205-44f8-9138-2ec6041aabdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-827000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9b955b2a-3f4a-4972-927e-8af5814e5d49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-827000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-827000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-827000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-827000 --output=json --user=testUser: exit status 83 (45.309208ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-827000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-827000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-827000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-827000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.19s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-402000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-402000 --driver=qemu2 : exit status 80 (9.901091208s)

                                                
                                                
-- stdout --
	* [first-402000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-402000" primary control-plane node in "first-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-402000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 12:21:18.184743 -0700 PDT m=+441.963182042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-404000 -n second-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-404000 -n second-404000: exit status 85 (79.781709ms)

                                                
                                                
-- stdout --
	* Profile "second-404000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-404000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-404000" host is not running, skipping log retrieval (state="* Profile \"second-404000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-404000\"")
helpers_test.go:175: Cleaning up "second-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-404000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 12:21:18.368322 -0700 PDT m=+442.146767042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-402000 -n first-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-402000 -n first-402000: exit status 7 (30.109125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-402000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-402000
--- FAIL: TestMinikubeProfile (10.19s)
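The post-mortem helpers above shell out to minikube status and branch on the exit status, tolerating 7 (host stopped) and 85 (profile not found) per the "(may be ok)" notes. A compact Go sketch of that pattern; the binary path and profile name are copied from the log, and treating 7 and 85 as non-fatal is inferred from the messages above, not from minikube documentation:

	// status_exit.go - run "minikube status" and branch on its exit status,
	// mirroring the post-mortem checks in the log.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "first-402000", "-n", "first-402000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			switch code := ee.ExitCode(); code {
			case 7, 85: // host stopped / profile not found in this report
				fmt.Printf("status error: exit status %d (may be ok)\n", code)
			default:
				fmt.Printf("unexpected exit status %d\n", code)
			}
		}
	}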

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-653000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-653000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.992910292s)

                                                
                                                
-- stdout --
	* [mount-start-1-653000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-653000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-653000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-653000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-653000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-653000 -n mount-start-1-653000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-653000 -n mount-start-1-653000: exit status 7 (67.728291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-653000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-684000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-684000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.826125541s)

                                                
                                                
-- stdout --
	* [multinode-684000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-684000" primary control-plane node in "multinode-684000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-684000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:21:28.743147    8075 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:21:28.743313    8075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:28.743317    8075 out.go:304] Setting ErrFile to fd 2...
	I0731 12:21:28.743319    8075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:21:28.743465    8075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:21:28.744492    8075 out.go:298] Setting JSON to false
	I0731 12:21:28.760446    8075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4851,"bootTime":1722448837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:21:28.760530    8075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:21:28.766866    8075 out.go:177] * [multinode-684000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:21:28.774794    8075 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:21:28.774838    8075 notify.go:220] Checking for updates...
	I0731 12:21:28.781763    8075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:21:28.784749    8075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:21:28.787770    8075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:21:28.790743    8075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:21:28.793731    8075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:21:28.797016    8075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:21:28.801710    8075 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:21:28.808739    8075 start.go:297] selected driver: qemu2
	I0731 12:21:28.808746    8075 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:21:28.808752    8075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:21:28.811127    8075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:21:28.813745    8075 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:21:28.816807    8075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:21:28.816866    8075 cni.go:84] Creating CNI manager for ""
	I0731 12:21:28.816871    8075 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 12:21:28.816877    8075 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:21:28.816913    8075 start.go:340] cluster config:
	{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:21:28.820708    8075 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:21:28.826741    8075 out.go:177] * Starting "multinode-684000" primary control-plane node in "multinode-684000" cluster
	I0731 12:21:28.830811    8075 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:21:28.830831    8075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:21:28.830843    8075 cache.go:56] Caching tarball of preloaded images
	I0731 12:21:28.830922    8075 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:21:28.830935    8075 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:21:28.831175    8075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/multinode-684000/config.json ...
	I0731 12:21:28.831188    8075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/multinode-684000/config.json: {Name:mk586db2a4539ce667297ccc8f552bf001ee9c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:21:28.831418    8075 start.go:360] acquireMachinesLock for multinode-684000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:21:28.831463    8075 start.go:364] duration metric: took 34.916µs to acquireMachinesLock for "multinode-684000"
	I0731 12:21:28.831477    8075 start.go:93] Provisioning new machine with config: &{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:21:28.831504    8075 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:21:28.834765    8075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:21:28.852385    8075 start.go:159] libmachine.API.Create for "multinode-684000" (driver="qemu2")
	I0731 12:21:28.852412    8075 client.go:168] LocalClient.Create starting
	I0731 12:21:28.852493    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:21:28.852523    8075 main.go:141] libmachine: Decoding PEM data...
	I0731 12:21:28.852534    8075 main.go:141] libmachine: Parsing certificate...
	I0731 12:21:28.852571    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:21:28.852595    8075 main.go:141] libmachine: Decoding PEM data...
	I0731 12:21:28.852607    8075 main.go:141] libmachine: Parsing certificate...
	I0731 12:21:28.852946    8075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:21:29.005629    8075 main.go:141] libmachine: Creating SSH key...
	I0731 12:21:29.099269    8075 main.go:141] libmachine: Creating Disk image...
	I0731 12:21:29.099277    8075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:21:29.099463    8075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:21:29.108827    8075 main.go:141] libmachine: STDOUT: 
	I0731 12:21:29.108851    8075 main.go:141] libmachine: STDERR: 
	I0731 12:21:29.108898    8075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2 +20000M
	I0731 12:21:29.116630    8075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:21:29.116644    8075 main.go:141] libmachine: STDERR: 
	I0731 12:21:29.116660    8075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:21:29.116665    8075 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:21:29.116679    8075 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:21:29.116716    8075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:72:88:be:54:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:21:29.118307    8075 main.go:141] libmachine: STDOUT: 
	I0731 12:21:29.118321    8075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:21:29.118337    8075 client.go:171] duration metric: took 265.928334ms to LocalClient.Create
	I0731 12:21:31.120443    8075 start.go:128] duration metric: took 2.288997584s to createHost
	I0731 12:21:31.120609    8075 start.go:83] releasing machines lock for "multinode-684000", held for 2.289102458s
	W0731 12:21:31.120680    8075 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:21:31.131709    8075 out.go:177] * Deleting "multinode-684000" in qemu2 ...
	W0731 12:21:31.167994    8075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:21:31.168017    8075 start.go:729] Will try again in 5 seconds ...
	I0731 12:21:36.170100    8075 start.go:360] acquireMachinesLock for multinode-684000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:21:36.170761    8075 start.go:364] duration metric: took 505.875µs to acquireMachinesLock for "multinode-684000"
	I0731 12:21:36.170918    8075 start.go:93] Provisioning new machine with config: &{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:21:36.171257    8075 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:21:36.188175    8075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:21:36.241131    8075 start.go:159] libmachine.API.Create for "multinode-684000" (driver="qemu2")
	I0731 12:21:36.241189    8075 client.go:168] LocalClient.Create starting
	I0731 12:21:36.241290    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:21:36.241358    8075 main.go:141] libmachine: Decoding PEM data...
	I0731 12:21:36.241372    8075 main.go:141] libmachine: Parsing certificate...
	I0731 12:21:36.241440    8075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:21:36.241484    8075 main.go:141] libmachine: Decoding PEM data...
	I0731 12:21:36.241500    8075 main.go:141] libmachine: Parsing certificate...
	I0731 12:21:36.242204    8075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:21:36.406454    8075 main.go:141] libmachine: Creating SSH key...
	I0731 12:21:36.477357    8075 main.go:141] libmachine: Creating Disk image...
	I0731 12:21:36.477362    8075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:21:36.477545    8075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:21:36.486726    8075 main.go:141] libmachine: STDOUT: 
	I0731 12:21:36.486744    8075 main.go:141] libmachine: STDERR: 
	I0731 12:21:36.486789    8075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2 +20000M
	I0731 12:21:36.494581    8075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:21:36.494596    8075 main.go:141] libmachine: STDERR: 
	I0731 12:21:36.494606    8075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:21:36.494612    8075 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:21:36.494623    8075 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:21:36.494671    8075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:72:05:07:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:21:36.496264    8075 main.go:141] libmachine: STDOUT: 
	I0731 12:21:36.496277    8075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:21:36.496289    8075 client.go:171] duration metric: took 255.102542ms to LocalClient.Create
	I0731 12:21:38.498396    8075 start.go:128] duration metric: took 2.327189917s to createHost
	I0731 12:21:38.498471    8075 start.go:83] releasing machines lock for "multinode-684000", held for 2.327735666s
	W0731 12:21:38.498827    8075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-684000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:21:38.512484    8075 out.go:177] 
	W0731 12:21:38.515560    8075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:21:38.515605    8075 out.go:239] * 
	* 
	W0731 12:21:38.517949    8075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:21:38.527469    8075 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-684000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (66.928666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)
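The alsologtostderr trace above shows how far provisioning gets before the socket refusal: both qemu-img steps (raw to qcow2 conversion, then a +20000M resize) succeed, and only the qemu launch through socket_vmnet_client fails. A Go sketch of those two disk-image steps, with placeholder paths instead of the Jenkins paths in the log:

	// disk_image.go - the two qemu-img invocations from the trace above,
	// run via os/exec with placeholder file names.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
		fmt.Println(name, "OK")
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		run("qemu-img", "resize", qcow2, "+20000M")
	}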

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (116.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.284625ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-684000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- rollout status deployment/busybox: exit status 1 (55.818042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.674416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.453083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.928541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.291666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.143041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.759458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.516583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.035958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.581792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.997708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.69825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.289875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.290625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.65275ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.851ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (30.053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (116.24s)
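Every kubectl invocation in this test fails the same way: "no server found for cluster" indicates that the kubeconfig entry for profile "multinode-684000" carries no API-server endpoint, which is what remains when the VM behind the profile never started (see the socket_vmnet errors under RestartKeepsNodes below). A minimal diagnostic sketch, assuming client-go and the default kubeconfig path (neither is part of this test suite), that lists which clusters actually have a server URL recorded:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// "no server found for cluster" points at a kubeconfig entry with no
    	// server URL; dump what is actually recorded for each cluster.
    	home, _ := os.UserHomeDir()
    	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
    	if err != nil {
    		panic(err)
    	}
    	for name, cluster := range cfg.Clusters {
    		fmt.Printf("cluster %q -> server %q\n", name, cluster.Server)
    	}
    }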

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-684000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.189333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (30.223833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-684000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-684000 -v 3 --alsologtostderr: exit status 83 (39.858125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-684000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:34.961974    8165 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:34.962122    8165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:34.962126    8165 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:34.962128    8165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:34.962255    8165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:34.962479    8165 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:34.962677    8165 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:34.966812    8165 out.go:177] * The control-plane node multinode-684000 host is not running: state=Stopped
	I0731 12:23:34.970769    8165 out.go:177]   To start a cluster, run: "minikube start -p multinode-684000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-684000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (29.362084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-684000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-684000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.625041ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-684000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-684000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-684000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (30.220167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-684000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-684000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-684000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-684000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (29.214375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
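The assertion at multinode_test.go:166 counts the Nodes array inside the profile's Config; the JSON above carries exactly one node (the control-plane entry), because the AddNode step earlier never succeeded. A sketch of the same count, decoding only the fields it needs — the struct here is a trimmed illustration, not minikube's own type:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Trimmed view of the `profile list --output json` schema shown above.
    type profileList struct {
    	Valid []struct {
    		Name   string
    		Config struct {
    			Nodes []struct {
    				Name         string
    				ControlPlane bool
    				Worker       bool
    			}
    		}
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64",
    		"profile", "list", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		panic(err)
    	}
    	for _, p := range pl.Valid {
    		// This run would print "multinode-684000: 1 node(s)"; the test wants 3.
    		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
    	}
    }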

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status --output json --alsologtostderr: exit status 7 (30.494625ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-684000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:35.165235    8177 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:35.165378    8177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.165381    8177 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:35.165384    8177 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.165532    8177 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:35.165662    8177 out.go:298] Setting JSON to true
	I0731 12:23:35.165674    8177 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:35.165739    8177 notify.go:220] Checking for updates...
	I0731 12:23:35.165866    8177 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:35.165873    8177 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:35.166075    8177 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:35.166079    8177 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:35.166081    8177 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-684000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (29.263083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
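The decode error above is a shape mismatch rather than missing data: with a single node, `status --output json` emits one JSON object, while the test unmarshals into a slice ([]cmd.Status). A self-contained reproduction — the Status struct here is a trimmed stand-in for minikube's cmd.Status:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Trimmed stand-in for minikube's cmd.Status.
    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    	Worker                                     bool
    }

    func main() {
    	// Exactly what the stdout block above contains: one object, not an array.
    	out := `{"Name":"multinode-684000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`

    	var statuses []Status
    	if err := json.Unmarshal([]byte(out), &statuses); err != nil {
    		// Prints: json: cannot unmarshal object into Go value of type []main.Status
    		fmt.Println(err)
    	}
    }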

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 node stop m03: exit status 85 (47.749083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-684000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status: exit status 7 (30.339875ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr: exit status 7 (30.013125ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:35.303530    8185 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:35.303678    8185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.303681    8185 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:35.303683    8185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.303798    8185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:35.303912    8185 out.go:298] Setting JSON to false
	I0731 12:23:35.303920    8185 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:35.303977    8185 notify.go:220] Checking for updates...
	I0731 12:23:35.304143    8185 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:35.304150    8185 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:35.304361    8185 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:35.304365    8185 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:35.304367    8185 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr": multinode-684000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (30.258708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.660708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:35.364299    8189 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:35.364739    8189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.364743    8189 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:35.364745    8189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.364889    8189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:35.365128    8189 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:35.365334    8189 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:35.368820    8189 out.go:177] 
	W0731 12:23:35.371914    8189 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 12:23:35.371918    8189 out.go:239] * 
	* 
	W0731 12:23:35.373870    8189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:23:35.376792    8189 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0731 12:23:35.364299    8189 out.go:291] Setting OutFile to fd 1 ...
I0731 12:23:35.364739    8189 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:23:35.364743    8189 out.go:304] Setting ErrFile to fd 2...
I0731 12:23:35.364745    8189 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 12:23:35.364889    8189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
I0731 12:23:35.365128    8189 mustload.go:65] Loading cluster: multinode-684000
I0731 12:23:35.365334    8189 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 12:23:35.368820    8189 out.go:177] 
W0731 12:23:35.371914    8189 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 12:23:35.371918    8189 out.go:239] * 
* 
W0731 12:23:35.373870    8189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 12:23:35.376792    8189 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-684000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (29.251958ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:35.409387    8191 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:35.409514    8191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.409517    8191 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:35.409523    8191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:35.409655    8191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:35.409772    8191 out.go:298] Setting JSON to false
	I0731 12:23:35.409781    8191 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:35.409833    8191 notify.go:220] Checking for updates...
	I0731 12:23:35.409981    8191 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:35.409988    8191 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:35.410176    8191 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:35.410180    8191 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:35.410183    8191 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (75.522125ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:36.293375    8193 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:36.293561    8193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:36.293565    8193 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:36.293569    8193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:36.293738    8193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:36.293894    8193 out.go:298] Setting JSON to false
	I0731 12:23:36.293905    8193 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:36.293941    8193 notify.go:220] Checking for updates...
	I0731 12:23:36.294204    8193 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:36.294212    8193 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:36.294460    8193 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:36.294465    8193 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:36.294468    8193 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (75.257125ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:38.361653    8195 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:38.361818    8195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:38.361822    8195 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:38.361825    8195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:38.362001    8195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:38.362174    8195 out.go:298] Setting JSON to false
	I0731 12:23:38.362184    8195 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:38.362223    8195 notify.go:220] Checking for updates...
	I0731 12:23:38.362445    8195 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:38.362454    8195 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:38.362729    8195 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:38.362733    8195 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:38.362736    8195 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (72.599375ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:41.441502    8197 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:41.441685    8197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:41.441689    8197 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:41.441693    8197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:41.441857    8197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:41.442008    8197 out.go:298] Setting JSON to false
	I0731 12:23:41.442018    8197 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:41.442054    8197 notify.go:220] Checking for updates...
	I0731 12:23:41.442265    8197 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:41.442273    8197 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:41.442537    8197 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:41.442542    8197 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:41.442545    8197 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (73.570041ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:46.393137    8202 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:46.393327    8202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:46.393331    8202 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:46.393334    8202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:46.393495    8202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:46.393669    8202 out.go:298] Setting JSON to false
	I0731 12:23:46.393679    8202 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:46.393710    8202 notify.go:220] Checking for updates...
	I0731 12:23:46.393944    8202 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:46.393953    8202 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:46.394225    8202 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:46.394230    8202 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:46.394232    8202 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (73.70175ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:49.473978    8204 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:49.474201    8204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:49.474206    8204 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:49.474209    8204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:49.474401    8204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:49.474584    8204 out.go:298] Setting JSON to false
	I0731 12:23:49.474596    8204 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:49.474649    8204 notify.go:220] Checking for updates...
	I0731 12:23:49.474888    8204 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:49.474897    8204 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:49.475201    8204 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:49.475206    8204 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:49.475209    8204 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (71.351667ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:23:55.167073    8206 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:23:55.167264    8206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:55.167269    8206 out.go:304] Setting ErrFile to fd 2...
	I0731 12:23:55.167272    8206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:23:55.167458    8206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:23:55.167606    8206 out.go:298] Setting JSON to false
	I0731 12:23:55.167617    8206 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:23:55.167657    8206 notify.go:220] Checking for updates...
	I0731 12:23:55.167907    8206 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:23:55.167915    8206 status.go:255] checking status of multinode-684000 ...
	I0731 12:23:55.168182    8206 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:23:55.168187    8206 status.go:343] host is not running, skipping remaining checks
	I0731 12:23:55.168189    8206 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (73.286916ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:08.555459    8208 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:08.555623    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:08.555628    8208 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:08.555631    8208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:08.555796    8208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:08.555941    8208 out.go:298] Setting JSON to false
	I0731 12:24:08.555952    8208 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:24:08.555988    8208 notify.go:220] Checking for updates...
	I0731 12:24:08.556193    8208 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:08.556201    8208 status.go:255] checking status of multinode-684000 ...
	I0731 12:24:08.556493    8208 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:24:08.556497    8208 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:08.556500    8208 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr: exit status 7 (72.781917ms)

                                                
                                                
-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:20.468869    8215 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:20.469065    8215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:20.469069    8215 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:20.469072    8215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:20.469264    8215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:20.469406    8215 out.go:298] Setting JSON to false
	I0731 12:24:20.469418    8215 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:24:20.469453    8215 notify.go:220] Checking for updates...
	I0731 12:24:20.469674    8215 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:20.469682    8215 status.go:255] checking status of multinode-684000 ...
	I0731 12:24:20.469973    8215 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:24:20.469978    8215 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:20.469981    8215 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-684000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (33.291292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.17s)
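The nine status runs above are a poll: the test re-executes the same command at widening intervals (12:23:35 through 12:24:20) until its roughly 45-second budget is spent, which is where this subtest's entire duration comes from. A generic sketch of that retry shape — the backoff schedule here is illustrative, not the test's actual one:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(45 * time.Second)
    	wait := time.Second
    	for {
    		out, err := exec.Command("out/minikube-darwin-arm64",
    			"-p", "multinode-684000", "status").CombinedOutput()
    		if err == nil {
    			fmt.Printf("healthy:\n%s", out)
    			return
    		}
    		if time.Now().After(deadline) {
    			fmt.Println("gave up:", err)
    			return
    		}
    		time.Sleep(wait)
    		wait *= 2 // back off, as the widening gaps above suggest
    	}
    }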

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.87s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-684000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-684000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-684000: (3.516020375s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-684000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-684000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219962291s)

                                                
                                                
-- stdout --
	* [multinode-684000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-684000" primary control-plane node in "multinode-684000" cluster
	* Restarting existing qemu2 VM for "multinode-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:24:24.113392    8239 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:24.113544    8239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:24.113549    8239 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:24.113552    8239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:24.113732    8239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:24.114999    8239 out.go:298] Setting JSON to false
	I0731 12:24:24.134355    8239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5027,"bootTime":1722448837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:24:24.134432    8239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:24:24.139040    8239 out.go:177] * [multinode-684000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:24:24.146006    8239 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:24:24.146037    8239 notify.go:220] Checking for updates...
	I0731 12:24:24.151952    8239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:24:24.154965    8239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:24:24.157920    8239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:24:24.160940    8239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:24:24.163934    8239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:24:24.167250    8239 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:24.167306    8239 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:24:24.171963    8239 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:24:24.178910    8239 start.go:297] selected driver: qemu2
	I0731 12:24:24.178916    8239 start.go:901] validating driver "qemu2" against &{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:24:24.178967    8239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:24:24.181537    8239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:24:24.181583    8239 cni.go:84] Creating CNI manager for ""
	I0731 12:24:24.181588    8239 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:24:24.181624    8239 start.go:340] cluster config:
	{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:24:24.185392    8239 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:24.190967    8239 out.go:177] * Starting "multinode-684000" primary control-plane node in "multinode-684000" cluster
	I0731 12:24:24.194955    8239 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:24:24.194973    8239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:24:24.194985    8239 cache.go:56] Caching tarball of preloaded images
	I0731 12:24:24.195053    8239 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:24:24.195071    8239 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:24:24.195126    8239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/multinode-684000/config.json ...
	I0731 12:24:24.195514    8239 start.go:360] acquireMachinesLock for multinode-684000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:24:24.195552    8239 start.go:364] duration metric: took 31.459µs to acquireMachinesLock for "multinode-684000"
	I0731 12:24:24.195561    8239 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:24:24.195566    8239 fix.go:54] fixHost starting: 
	I0731 12:24:24.195695    8239 fix.go:112] recreateIfNeeded on multinode-684000: state=Stopped err=<nil>
	W0731 12:24:24.195703    8239 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:24:24.198960    8239 out.go:177] * Restarting existing qemu2 VM for "multinode-684000" ...
	I0731 12:24:24.206950    8239 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:24:24.206998    8239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:72:05:07:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:24:24.209201    8239 main.go:141] libmachine: STDOUT: 
	I0731 12:24:24.209220    8239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:24:24.209249    8239 fix.go:56] duration metric: took 13.684083ms for fixHost
	I0731 12:24:24.209254    8239 start.go:83] releasing machines lock for "multinode-684000", held for 13.697834ms
	W0731 12:24:24.209262    8239 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:24:24.209293    8239 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:24:24.209298    8239 start.go:729] Will try again in 5 seconds ...
	I0731 12:24:29.211482    8239 start.go:360] acquireMachinesLock for multinode-684000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:24:29.211908    8239 start.go:364] duration metric: took 334.334µs to acquireMachinesLock for "multinode-684000"
	I0731 12:24:29.212022    8239 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:24:29.212040    8239 fix.go:54] fixHost starting: 
	I0731 12:24:29.212716    8239 fix.go:112] recreateIfNeeded on multinode-684000: state=Stopped err=<nil>
	W0731 12:24:29.212742    8239 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:24:29.221067    8239 out.go:177] * Restarting existing qemu2 VM for "multinode-684000" ...
	I0731 12:24:29.225101    8239 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:24:29.225269    8239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:72:05:07:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:24:29.234182    8239 main.go:141] libmachine: STDOUT: 
	I0731 12:24:29.234251    8239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:24:29.234335    8239 fix.go:56] duration metric: took 22.295ms for fixHost
	I0731 12:24:29.234352    8239 start.go:83] releasing machines lock for "multinode-684000", held for 22.417333ms
	W0731 12:24:29.234556    8239 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:24:29.242112    8239 out.go:177] 
	W0731 12:24:29.246149    8239 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:24:29.246179    8239 out.go:239] * 
	* 
	W0731 12:24:29.248625    8239 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:24:29.257051    8239 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-684000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-684000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (32.317458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.87s)
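
Every qemu2 start in this run dies at the same point: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and the host stays Stopped. A minimal triage sketch for the build host follows; the launchd service name and the Homebrew-managed install are assumptions, not taken from the log:

	# Does the socket exist, and is the daemon registered with launchd?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If the daemon is down and was installed via Homebrew, restarting it should clear this:
	sudo brew services restart socket_vmnet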

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 node delete m03: exit status 83 (40.864459ms)

-- stdout --
	* The control-plane node multinode-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-684000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-684000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr: exit status 7 (29.514292ms)

-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:24:29.440766    8253 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:29.440929    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:29.440933    8253 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:29.440935    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:29.441067    8253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:29.441175    8253 out.go:298] Setting JSON to false
	I0731 12:24:29.441184    8253 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:24:29.441244    8253 notify.go:220] Checking for updates...
	I0731 12:24:29.441365    8253 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:29.441372    8253 status.go:255] checking status of multinode-684000 ...
	I0731 12:24:29.441583    8253 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:24:29.441587    8253 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:29.441590    8253 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (29.756333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
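
With the control-plane host already stopped, "node delete" exits 83 at the guard shown above ("The control-plane node multinode-684000 host is not running") before it ever looks for m03, so this subtest inherits the earlier provisioning failure rather than exposing a delete bug. The guard can be reproduced by hand with the same binary and profile (a sketch, assuming the cluster is still stopped):

	# Any node operation against a stopped control plane trips the same guard
	out/minikube-darwin-arm64 -p multinode-684000 node delete m03; echo "exit=$?"   # expect 83
	# A successful start is the precondition this subtest is missing
	out/minikube-darwin-arm64 start -p multinode-684000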

TestMultiNode/serial/StopMultiNode (2.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-684000 stop: (1.949262625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status: exit status 7 (66.482375ms)

-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr: exit status 7 (32.122208ms)

-- stdout --
	multinode-684000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 12:24:31.518964    8271 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:31.519119    8271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:31.519122    8271 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:31.519125    8271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:31.519268    8271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:31.519394    8271 out.go:298] Setting JSON to false
	I0731 12:24:31.519403    8271 mustload.go:65] Loading cluster: multinode-684000
	I0731 12:24:31.519464    8271 notify.go:220] Checking for updates...
	I0731 12:24:31.519584    8271 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:31.519590    8271 status.go:255] checking status of multinode-684000 ...
	I0731 12:24:31.519818    8271 status.go:330] multinode-684000 host status = "Stopped" (err=<nil>)
	I0731 12:24:31.519822    8271 status.go:343] host is not running, skipping remaining checks
	I0731 12:24:31.519824    8271 status.go:257] multinode-684000 status: &{Name:multinode-684000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr": multinode-684000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr": multinode-684000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (29.819875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.08s)
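
The "minikube stop" itself succeeds here; the failure is in the assertions at multinode_test.go:364 and :368, which expect one "host: Stopped"/"kubelet: Stopped" pair per node of a multi-node cluster. Because the worker node was never provisioned, status reports a single node and the count comes up short. A sketch of the same count, assuming the assertion amounts to counting these lines (which the error messages above suggest):

	# Count stopped hosts and kubelets the way the assertion does; a 2-node cluster should yield 2
	out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr | grep -c "host: Stopped"
	out/minikube-darwin-arm64 -p multinode-684000 status --alsologtostderr | grep -c "kubelet: Stopped"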

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-684000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-684000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.178624667s)

-- stdout --
	* [multinode-684000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-684000" primary control-plane node in "multinode-684000" cluster
	* Restarting existing qemu2 VM for "multinode-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-684000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:24:31.578424    8275 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:31.578562    8275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:31.578565    8275 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:31.578567    8275 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:31.578698    8275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:31.579643    8275 out.go:298] Setting JSON to false
	I0731 12:24:31.595484    8275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5034,"bootTime":1722448837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:24:31.595565    8275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:24:31.600376    8275 out.go:177] * [multinode-684000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:24:31.607332    8275 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:24:31.607386    8275 notify.go:220] Checking for updates...
	I0731 12:24:31.615314    8275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:24:31.619283    8275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:24:31.622337    8275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:24:31.625362    8275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:24:31.628339    8275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:24:31.631575    8275 config.go:182] Loaded profile config "multinode-684000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:31.631827    8275 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:24:31.636330    8275 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:24:31.643299    8275 start.go:297] selected driver: qemu2
	I0731 12:24:31.643304    8275 start.go:901] validating driver "qemu2" against &{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:24:31.643356    8275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:24:31.645737    8275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:24:31.645759    8275 cni.go:84] Creating CNI manager for ""
	I0731 12:24:31.645764    8275 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 12:24:31.645815    8275 start.go:340] cluster config:
	{Name:multinode-684000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:24:31.649361    8275 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:31.654368    8275 out.go:177] * Starting "multinode-684000" primary control-plane node in "multinode-684000" cluster
	I0731 12:24:31.658324    8275 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:24:31.658341    8275 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:24:31.658353    8275 cache.go:56] Caching tarball of preloaded images
	I0731 12:24:31.658425    8275 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:24:31.658431    8275 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:24:31.658489    8275 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/multinode-684000/config.json ...
	I0731 12:24:31.658879    8275 start.go:360] acquireMachinesLock for multinode-684000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:24:31.658906    8275 start.go:364] duration metric: took 22.125µs to acquireMachinesLock for "multinode-684000"
	I0731 12:24:31.658915    8275 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:24:31.658920    8275 fix.go:54] fixHost starting: 
	I0731 12:24:31.659037    8275 fix.go:112] recreateIfNeeded on multinode-684000: state=Stopped err=<nil>
	W0731 12:24:31.659045    8275 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:24:31.663300    8275 out.go:177] * Restarting existing qemu2 VM for "multinode-684000" ...
	I0731 12:24:31.671192    8275 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:24:31.671230    8275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:72:05:07:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:24:31.673061    8275 main.go:141] libmachine: STDOUT: 
	I0731 12:24:31.673080    8275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:24:31.673108    8275 fix.go:56] duration metric: took 14.189291ms for fixHost
	I0731 12:24:31.673112    8275 start.go:83] releasing machines lock for "multinode-684000", held for 14.201708ms
	W0731 12:24:31.673119    8275 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:24:31.673148    8275 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:24:31.673153    8275 start.go:729] Will try again in 5 seconds ...
	I0731 12:24:36.675138    8275 start.go:360] acquireMachinesLock for multinode-684000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:24:36.675555    8275 start.go:364] duration metric: took 331.458µs to acquireMachinesLock for "multinode-684000"
	I0731 12:24:36.675747    8275 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:24:36.675765    8275 fix.go:54] fixHost starting: 
	I0731 12:24:36.676508    8275 fix.go:112] recreateIfNeeded on multinode-684000: state=Stopped err=<nil>
	W0731 12:24:36.676534    8275 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:24:36.680910    8275 out.go:177] * Restarting existing qemu2 VM for "multinode-684000" ...
	I0731 12:24:36.685025    8275 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:24:36.685281    8275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:eb:72:05:07:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/multinode-684000/disk.qcow2
	I0731 12:24:36.694106    8275 main.go:141] libmachine: STDOUT: 
	I0731 12:24:36.694171    8275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:24:36.694254    8275 fix.go:56] duration metric: took 18.490292ms for fixHost
	I0731 12:24:36.694274    8275 start.go:83] releasing machines lock for "multinode-684000", held for 18.651625ms
	W0731 12:24:36.694508    8275 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-684000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:24:36.701901    8275 out.go:177] 
	W0731 12:24:36.704936    8275 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:24:36.705010    8275 out.go:239] * 
	* 
	W0731 12:24:36.707508    8275 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:24:36.715864    8275 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-684000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (67.70375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
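
The restart path re-runs the exact socket_vmnet_client invocation logged above, so the failure can be isolated from minikube and qemu entirely by giving the client a trivial payload; "Connection refused" here would confirm the daemon side is at fault. The "true" payload is an illustrative stand-in for the qemu command line, not something the test runs:

	# Run the client standalone with the socket path taken from the log above
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true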

TestMultiNode/serial/ValidateNameConflict (20.38s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-684000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-684000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-684000-m01 --driver=qemu2 : exit status 80 (10.056456333s)

-- stdout --
	* [multinode-684000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-684000-m01" primary control-plane node in "multinode-684000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-684000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-684000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-684000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-684000-m02 --driver=qemu2 : exit status 80 (10.093886667s)

-- stdout --
	* [multinode-684000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-684000-m02" primary control-plane node in "multinode-684000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-684000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-684000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-684000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-684000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-684000: exit status 83 (81.214209ms)

-- stdout --
	* The control-plane node multinode-684000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-684000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-684000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-684000 -n multinode-684000: exit status 7 (30.2165ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-684000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.38s)
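
ValidateNameConflict shows that the fresh-create path fails identically to the restart path, which points at the daemon rather than stale per-profile state. If no service manager is supervising it, the daemon can be launched by hand against the socket the client expects; the binary path below matches the SocketVMnetPath fields in the log, while the gateway flag and its value follow the socket_vmnet README and are assumptions about this host:

	# Start the daemon in the foreground on the expected socket (needs root; Ctrl-C to stop)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet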

TestPreload (10.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-555000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-555000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.032549292s)

-- stdout --
	* [test-preload-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-555000" primary control-plane node in "test-preload-555000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-555000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:24:57.318108    8333 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:24:57.318301    8333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:57.318304    8333 out.go:304] Setting ErrFile to fd 2...
	I0731 12:24:57.318307    8333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:24:57.318445    8333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:24:57.319422    8333 out.go:298] Setting JSON to false
	I0731 12:24:57.335442    8333 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5060,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:24:57.335516    8333 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:24:57.341712    8333 out.go:177] * [test-preload-555000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:24:57.349673    8333 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:24:57.349710    8333 notify.go:220] Checking for updates...
	I0731 12:24:57.357700    8333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:24:57.360684    8333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:24:57.363685    8333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:24:57.366694    8333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:24:57.369668    8333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:24:57.373038    8333 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:24:57.373082    8333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:24:57.377670    8333 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:24:57.384638    8333 start.go:297] selected driver: qemu2
	I0731 12:24:57.384644    8333 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:24:57.384652    8333 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:24:57.386934    8333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:24:57.389668    8333 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:24:57.392746    8333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:24:57.392790    8333 cni.go:84] Creating CNI manager for ""
	I0731 12:24:57.392798    8333 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:24:57.392801    8333 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:24:57.392836    8333 start.go:340] cluster config:
	{Name:test-preload-555000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:24:57.396336    8333 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.403680    8333 out.go:177] * Starting "test-preload-555000" primary control-plane node in "test-preload-555000" cluster
	I0731 12:24:57.407693    8333 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0731 12:24:57.407767    8333 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/test-preload-555000/config.json ...
	I0731 12:24:57.407788    8333 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/test-preload-555000/config.json: {Name:mkce972be26fccbf8037d919e0bbfc579c4369c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:24:57.407811    8333 cache.go:107] acquiring lock: {Name:mkc9c2a55366b75483b97d66ae57ca3561b9b713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.407826    8333 cache.go:107] acquiring lock: {Name:mkf34d062f2b7fc1919e9158cb8ac230cadfb56a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.407838    8333 cache.go:107] acquiring lock: {Name:mka51fcc16ccad955cd89b16fe59ca0ac5e61c86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.407811    8333 cache.go:107] acquiring lock: {Name:mk9272971e705a314fde8caf791b00ae5a154e1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.408031    8333 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 12:24:57.408049    8333 start.go:360] acquireMachinesLock for test-preload-555000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:24:57.408040    8333 cache.go:107] acquiring lock: {Name:mk1936d77eea5a8b3f2ae765cc2d417873fde172 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.408048    8333 cache.go:107] acquiring lock: {Name:mkd484c759cc96d75bde8607dca6e3147a087016 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.408086    8333 start.go:364] duration metric: took 30.584µs to acquireMachinesLock for "test-preload-555000"
	I0731 12:24:57.408095    8333 cache.go:107] acquiring lock: {Name:mk32c9bdfc0b1617274eafa47d6f632eda040485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.408127    8333 cache.go:107] acquiring lock: {Name:mk7da52c7d192ae42373d9849dd097df3819ab6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:24:57.408099    8333 start.go:93] Provisioning new machine with config: &{Name:test-preload-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:24:57.408158    8333 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 12:24:57.408159    8333 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:24:57.408034    8333 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 12:24:57.408163    8333 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:24:57.408201    8333 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:24:57.408363    8333 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 12:24:57.408365    8333 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:24:57.411641    8333 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:24:57.412176    8333 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:24:57.419392    8333 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:24:57.419535    8333 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 12:24:57.419955    8333 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 12:24:57.419975    8333 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:24:57.422290    8333 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 12:24:57.422739    8333 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:24:57.422808    8333 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 12:24:57.423010    8333 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:24:57.428937    8333 start.go:159] libmachine.API.Create for "test-preload-555000" (driver="qemu2")
	I0731 12:24:57.428960    8333 client.go:168] LocalClient.Create starting
	I0731 12:24:57.429024    8333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:24:57.429054    8333 main.go:141] libmachine: Decoding PEM data...
	I0731 12:24:57.429067    8333 main.go:141] libmachine: Parsing certificate...
	I0731 12:24:57.429104    8333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:24:57.429126    8333 main.go:141] libmachine: Decoding PEM data...
	I0731 12:24:57.429140    8333 main.go:141] libmachine: Parsing certificate...
	I0731 12:24:57.429488    8333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:24:57.613462    8333 main.go:141] libmachine: Creating SSH key...
	I0731 12:24:57.721646    8333 main.go:141] libmachine: Creating Disk image...
	I0731 12:24:57.721714    8333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:24:57.722338    8333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2
	I0731 12:24:57.731685    8333 main.go:141] libmachine: STDOUT: 
	I0731 12:24:57.731712    8333 main.go:141] libmachine: STDERR: 
	I0731 12:24:57.731769    8333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2 +20000M
	I0731 12:24:57.740884    8333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:24:57.740904    8333 main.go:141] libmachine: STDERR: 
	I0731 12:24:57.740917    8333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2
	I0731 12:24:57.740922    8333 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:24:57.740943    8333 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:24:57.740969    8333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:11:f2:7b:da:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2
	I0731 12:24:57.743230    8333 main.go:141] libmachine: STDOUT: 
	I0731 12:24:57.743245    8333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:24:57.743260    8333 client.go:171] duration metric: took 314.308041ms to LocalClient.Create
	I0731 12:24:57.812545    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 12:24:57.817077    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 12:24:57.820809    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:24:57.843946    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:24:57.869492    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 12:24:57.943208    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 12:24:57.970933    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0731 12:24:57.970954    8333 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 563.035834ms
	I0731 12:24:57.970979    8333 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0731 12:24:58.002535    8333 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:24:58.002616    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0731 12:24:58.573421    8333 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:24:58.573527    8333 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:24:58.791631    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:24:58.791685    8333 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.383917083s
	I0731 12:24:58.791710    8333 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:24:59.743398    8333 start.go:128] duration metric: took 2.335204417s to createHost
	I0731 12:24:59.743456    8333 start.go:83] releasing machines lock for "test-preload-555000", held for 2.335440375s
	W0731 12:24:59.743543    8333 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:24:59.757492    8333 out.go:177] * Deleting "test-preload-555000" in qemu2 ...
	W0731 12:24:59.791958    8333 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:24:59.791987    8333 start.go:729] Will try again in 5 seconds ...
	I0731 12:25:00.127392    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0731 12:25:00.127437    8333 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.71969875s
	I0731 12:25:00.127460    8333 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0731 12:25:00.707660    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0731 12:25:00.707708    8333 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.299774542s
	I0731 12:25:00.707736    8333 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0731 12:25:02.776683    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0731 12:25:02.776767    8333 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.369141125s
	I0731 12:25:02.776792    8333 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0731 12:25:03.168274    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0731 12:25:03.168323    8333 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.760697833s
	I0731 12:25:03.168382    8333 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0731 12:25:03.217363    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0731 12:25:03.217407    8333 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.809545833s
	I0731 12:25:03.217427    8333 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0731 12:25:04.791996    8333 start.go:360] acquireMachinesLock for test-preload-555000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:04.792385    8333 start.go:364] duration metric: took 319.292µs to acquireMachinesLock for "test-preload-555000"
	I0731 12:25:04.792505    8333 start.go:93] Provisioning new machine with config: &{Name:test-preload-555000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-555000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:25:04.792730    8333 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:25:04.804317    8333 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:25:04.855725    8333 start.go:159] libmachine.API.Create for "test-preload-555000" (driver="qemu2")
	I0731 12:25:04.855772    8333 client.go:168] LocalClient.Create starting
	I0731 12:25:04.855890    8333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:25:04.855976    8333 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:04.855999    8333 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:04.856074    8333 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:25:04.856118    8333 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:04.856137    8333 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:04.856649    8333 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:25:05.024229    8333 main.go:141] libmachine: Creating SSH key...
	I0731 12:25:05.255627    8333 main.go:141] libmachine: Creating Disk image...
	I0731 12:25:05.255635    8333 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:25:05.255853    8333 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2
	I0731 12:25:05.265853    8333 main.go:141] libmachine: STDOUT: 
	I0731 12:25:05.265872    8333 main.go:141] libmachine: STDERR: 
	I0731 12:25:05.265927    8333 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2 +20000M
	I0731 12:25:05.274020    8333 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:25:05.274033    8333 main.go:141] libmachine: STDERR: 
	I0731 12:25:05.274043    8333 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2
	I0731 12:25:05.274049    8333 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:25:05.274070    8333 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:05.274112    8333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:1b:aa:02:49:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/test-preload-555000/disk.qcow2
	I0731 12:25:05.275822    8333 main.go:141] libmachine: STDOUT: 
	I0731 12:25:05.275839    8333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:05.275852    8333 client.go:171] duration metric: took 420.0885ms to LocalClient.Create
	I0731 12:25:07.209983    8333 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0731 12:25:07.210038    8333 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.802338875s
	I0731 12:25:07.210069    8333 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0731 12:25:07.210116    8333 cache.go:87] Successfully saved all images to host disk.
	I0731 12:25:07.277996    8333 start.go:128] duration metric: took 2.485324833s to createHost
	I0731 12:25:07.278033    8333 start.go:83] releasing machines lock for "test-preload-555000", held for 2.485704458s
	W0731 12:25:07.278314    8333 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:07.286795    8333 out.go:177] 
	W0731 12:25:07.294938    8333 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:07.294964    8333 out.go:239] * 
	* 
	W0731 12:25:07.297608    8333 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:25:07.307786    8333 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-555000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-31 12:25:07.324929 -0700 PDT m=+671.111156001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-555000 -n test-preload-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-555000 -n test-preload-555000: exit status 7 (68.403959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-555000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-555000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-555000
--- FAIL: TestPreload (10.18s)
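Every failing start in this section dies at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial to the daemon socket at /var/run/socket_vmnet is refused. Below is a minimal Go sketch of that dial, usable as a host-side preflight check before a run; the socket path is taken from the log above, while the program itself is illustrative and not minikube code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path reported in the failures above. "connection refused"
		// means nothing is accepting on it, i.e. the socket_vmnet daemon is
		// not running (or is listening somewhere else).
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}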

TestScheduledStopUnix (10.19s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-297000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-297000 --memory=2048 --driver=qemu2 : exit status 80 (10.048252042s)

-- stdout --
	* [scheduled-stop-297000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-297000" primary control-plane node in "scheduled-stop-297000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-297000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-297000" primary control-plane node in "scheduled-stop-297000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-31 12:25:17.518875 -0700 PDT m=+681.305449042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-297000 -n scheduled-stop-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-297000 -n scheduled-stop-297000: exit status 7 (69.214125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-297000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-297000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-297000
--- FAIL: TestScheduledStopUnix (10.19s)
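The stdout above shows the start path retrying host creation exactly once: the first failure triggers "* Deleting ... in qemu2 ...", the verbose runs earlier in this report log "Will try again in 5 seconds ...", and a second failure becomes the GUEST_PROVISION exit (status 80). A schematic Go sketch of that one-retry shape follows; createHost and deleteHost are hypothetical stand-ins, not minikube's real functions.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithOneRetry mirrors the create -> delete -> wait -> recreate
	// sequence visible in the logs above; it is a sketch, not minikube code.
	func startWithOneRetry(createHost, deleteHost func() error) error {
		if err := createHost(); err == nil {
			return nil
		}
		_ = deleteHost()            // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost()         // a second failure is fatal
	}

	func main() {
		err := startWithOneRetry(
			func() error { return errors.New(`dial unix /var/run/socket_vmnet: connection refused`) },
			func() error { return nil },
		)
		fmt.Println("final result:", err)
	}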

TestSkaffold (12.26s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2439677058 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2439677058 version: (1.062675417s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-167000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-167000 --memory=2600 --driver=qemu2 : exit status 80 (9.843204667s)

-- stdout --
	* [skaffold-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-167000" primary control-plane node in "skaffold-167000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-167000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-167000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-167000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-167000" primary control-plane node in "skaffold-167000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-167000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-167000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-31 12:25:29.783752 -0700 PDT m=+693.570742667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-167000 -n skaffold-167000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-167000 -n skaffold-167000: exit status 7 (63.946542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-167000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-167000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-167000
--- FAIL: TestSkaffold (12.26s)
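Each post-mortem in this report runs `status --format={{.Host}}` and treats a non-zero exit with host state "Stopped" as acceptable ("status error: exit status 7 (may be ok)"), skipping log retrieval rather than reporting a second error. A small Go sketch of that check, assuming the binary path and profile name used in this section:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe the harness performs above; on a non-zero exit, err is
		// set but stdout still carries the reported host state.
		out, err := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "skaffold-167000").Output()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "Stopped" {
			fmt.Println("host is not running; skipping log retrieval")
			return
		}
		fmt.Println("host state:", state, "err:", err)
	}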

TestRunningBinaryUpgrade (625s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.57235153 start -p running-upgrade-992000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.57235153 start -p running-upgrade-992000 --memory=2200 --vm-driver=qemu2 : (1m1.38693s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-992000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-992000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m49.518930042s)

-- stdout --
	* [running-upgrade-992000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-992000" primary control-plane node in "running-upgrade-992000" cluster
	* Updating the running qemu2 "running-upgrade-992000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 12:26:53.882150    8653 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:53.882269    8653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:53.882274    8653 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:53.882276    8653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:53.882426    8653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:26:53.883499    8653 out.go:298] Setting JSON to false
	I0731 12:26:53.900375    8653 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5176,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:26:53.900469    8653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:53.904944    8653 out.go:177] * [running-upgrade-992000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:53.912022    8653 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:26:53.912062    8653 notify.go:220] Checking for updates...
	I0731 12:26:53.918946    8653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:26:53.920225    8653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:53.922972    8653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:53.926022    8653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:26:53.929022    8653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:53.932255    8653 config.go:182] Loaded profile config "running-upgrade-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:26:53.934964    8653 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:26:53.938025    8653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:53.940996    8653 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:26:53.948052    8653 start.go:297] selected driver: qemu2
	I0731 12:26:53.948057    8653 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:26:53.948101    8653 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:53.950359    8653 cni.go:84] Creating CNI manager for ""
	I0731 12:26:53.950381    8653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:26:53.950412    8653 start.go:340] cluster config:
	{Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:26:53.950460    8653 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:53.958958    8653 out.go:177] * Starting "running-upgrade-992000" primary control-plane node in "running-upgrade-992000" cluster
	I0731 12:26:53.961913    8653 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:26:53.961924    8653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:26:53.961932    8653 cache.go:56] Caching tarball of preloaded images
	I0731 12:26:53.961974    8653 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:26:53.961978    8653 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:26:53.962027    8653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/config.json ...
	I0731 12:26:53.962337    8653 start.go:360] acquireMachinesLock for running-upgrade-992000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:27:06.346343    8653 start.go:364] duration metric: took 12.384411292s to acquireMachinesLock for "running-upgrade-992000"
	I0731 12:27:06.346386    8653 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:27:06.346395    8653 fix.go:54] fixHost starting: 
	I0731 12:27:06.347199    8653 fix.go:112] recreateIfNeeded on running-upgrade-992000: state=Running err=<nil>
	W0731 12:27:06.347210    8653 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:27:06.351367    8653 out.go:177] * Updating the running qemu2 "running-upgrade-992000" VM ...
	I0731 12:27:06.361255    8653 machine.go:94] provisionDockerMachine start ...
	I0731 12:27:06.361319    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.361441    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.361445    8653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:27:06.421914    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-992000
	
	I0731 12:27:06.421930    8653 buildroot.go:166] provisioning hostname "running-upgrade-992000"
	I0731 12:27:06.421975    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.422098    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.422103    8653 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-992000 && echo "running-upgrade-992000" | sudo tee /etc/hostname
	I0731 12:27:06.495256    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-992000
	
	I0731 12:27:06.495315    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.495454    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.495465    8653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-992000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-992000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-992000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:27:06.560763    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:06.560775    8653 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-6494/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-6494/.minikube}
	I0731 12:27:06.560788    8653 buildroot.go:174] setting up certificates
	I0731 12:27:06.560793    8653 provision.go:84] configureAuth start
	I0731 12:27:06.560800    8653 provision.go:143] copyHostCerts
	I0731 12:27:06.560872    8653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem, removing ...
	I0731 12:27:06.560880    8653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem
	I0731 12:27:06.561003    8653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem (1082 bytes)
	I0731 12:27:06.561194    8653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem, removing ...
	I0731 12:27:06.561198    8653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem
	I0731 12:27:06.561242    8653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem (1123 bytes)
	I0731 12:27:06.561347    8653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem, removing ...
	I0731 12:27:06.561350    8653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem
	I0731 12:27:06.561391    8653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem (1675 bytes)
	I0731 12:27:06.561479    8653 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-992000 san=[127.0.0.1 localhost minikube running-upgrade-992000]
	I0731 12:27:06.695327    8653 provision.go:177] copyRemoteCerts
	I0731 12:27:06.695360    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:27:06.695368    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:27:06.728419    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:27:06.735754    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:27:06.745449    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:27:06.752914    8653 provision.go:87] duration metric: took 192.119541ms to configureAuth
	I0731 12:27:06.752927    8653 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:27:06.753055    8653 config.go:182] Loaded profile config "running-upgrade-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:27:06.753095    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.753191    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.753198    8653 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:27:06.815531    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:27:06.815541    8653 buildroot.go:70] root file system type: tmpfs
	I0731 12:27:06.815601    8653 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:27:06.815655    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.815768    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.815801    8653 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:27:06.881031    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:27:06.881087    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.881205    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.881214    8653 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:27:06.942598    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:06.942611    8653 machine.go:97] duration metric: took 581.369375ms to provisionDockerMachine
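A note on the provisioning step above: the rendered unit is written to docker.service.new and only swapped into place when it differs from the live file. A minimal shell sketch of that compare-and-swap pattern, using only the commands visible in the log (diff exits non-zero on any difference, which is what triggers the move and restart):

	# Sketch of the idempotent unit update run over SSH above.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}
	# In this run diff exited 0 (no change), so docker was not restarted at this step.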
	I0731 12:27:06.942617    8653 start.go:293] postStartSetup for "running-upgrade-992000" (driver="qemu2")
	I0731 12:27:06.942624    8653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:27:06.942689    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:27:06.942698    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:27:06.975416    8653 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:27:06.976843    8653 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:27:06.976852    8653 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/addons for local assets ...
	I0731 12:27:06.976927    8653 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/files for local assets ...
	I0731 12:27:06.977012    8653 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0731 12:27:06.977102    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:27:06.980250    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:06.987888    8653 start.go:296] duration metric: took 45.266959ms for postStartSetup
	I0731 12:27:06.987905    8653 fix.go:56] duration metric: took 641.53525ms for fixHost
	I0731 12:27:06.987941    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.988055    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.988060    8653 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:27:07.049589    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454027.157936791
	
	I0731 12:27:07.049599    8653 fix.go:216] guest clock: 1722454027.157936791
	I0731 12:27:07.049603    8653 fix.go:229] Guest: 2024-07-31 12:27:07.157936791 -0700 PDT Remote: 2024-07-31 12:27:06.987907 -0700 PDT m=+13.127075001 (delta=170.029791ms)
	I0731 12:27:07.049615    8653 fix.go:200] guest clock delta is within tolerance: 170.029791ms
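The clock check above samples the guest with date +%s.%N over SSH and compares it to the host wall clock; the ~170ms delta is inside the sync tolerance, so no resync is issued. A rough illustration (the real comparison happens in minikube's Go code; the ssh target is elided and the bc arithmetic is only for the sketch):

	guest=$(ssh ... date +%s.%N)   # guest-side sample, as run above
	host=$(date +%s.%N)            # host-side sample
	echo "delta: $(echo "$guest - $host" | bc)s"   # resync only if the delta exceeds tolerance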
	I0731 12:27:07.049618    8653 start.go:83] releasing machines lock for "running-upgrade-992000", held for 703.279917ms
	I0731 12:27:07.049688    8653 ssh_runner.go:195] Run: cat /version.json
	I0731 12:27:07.049701    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:27:07.049688    8653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:27:07.049731    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	W0731 12:27:07.080551    8653 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:27:07.080623    8653 ssh_runner.go:195] Run: systemctl --version
	I0731 12:27:07.082584    8653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:27:07.084253    8653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:27:07.084282    8653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:27:07.087631    8653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:27:07.092573    8653 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:27:07.092583    8653 start.go:495] detecting cgroup driver to use...
	I0731 12:27:07.092657    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:07.098586    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:27:07.101794    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:27:07.104978    8653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:27:07.105008    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:27:07.108284    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:07.111572    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:27:07.114408    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:07.117150    8653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:27:07.120640    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:27:07.123481    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:27:07.126388    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:27:07.129224    8653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:27:07.132609    8653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:27:07.136156    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:07.254192    8653 ssh_runner.go:195] Run: sudo systemctl restart containerd
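The sed passes above force containerd onto the cgroupfs driver and align its CRI settings with minikube's defaults. An assumed /etc/containerd/config.toml fragment showing the shape those substitutions produce (reconstructed from the sed expressions in the log, not dumped from this run):

	cat <<'EOF' >/dev/null   # illustrative only; not executed by the test
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.7"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	EOF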
	I0731 12:27:07.265023    8653 start.go:495] detecting cgroup driver to use...
	I0731 12:27:07.265099    8653 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:27:07.275903    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:07.282643    8653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:27:07.293362    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:07.330348    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:27:07.335863    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:07.341790    8653 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:27:07.343119    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:27:07.345505    8653 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:27:07.350583    8653 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:27:07.456130    8653 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:27:07.573974    8653 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:27:07.574039    8653 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:27:07.580215    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:07.683035    8653 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:24.196025    8653 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.513533041s)
	I0731 12:27:24.196085    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:27:24.200932    8653 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 12:27:24.207757    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:24.212586    8653 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:27:24.299484    8653 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:27:24.383781    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:24.476060    8653 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:27:24.482532    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:24.487190    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:24.573976    8653 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:27:24.611740    8653 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:27:24.611831    8653 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:27:24.614093    8653 start.go:563] Will wait 60s for crictl version
	I0731 12:27:24.614151    8653 ssh_runner.go:195] Run: which crictl
	I0731 12:27:24.615554    8653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:27:24.627488    8653 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:27:24.627567    8653 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:24.640457    8653 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:24.664228    8653 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:27:24.664353    8653 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:27:24.665967    8653 kubeadm.go:883] updating cluster {Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:27:24.666011    8653 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:27:24.666052    8653 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:24.676295    8653 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:24.676304    8653 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:24.676353    8653 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:24.680016    8653 ssh_runner.go:195] Run: which lz4
	I0731 12:27:24.681466    8653 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:27:24.682661    8653 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:27:24.682671    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:27:25.742051    8653 docker.go:649] duration metric: took 1.060651084s to copy over tarball
	I0731 12:27:25.742114    8653 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:27:26.836932    8653 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.094842708s)
	I0731 12:27:26.836945    8653 ssh_runner.go:146] rm: /preloaded.tar.lz4
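Preload mechanics for the step above: rather than pulling images from a registry, minikube copies a cached lz4 tarball of the docker image store into the guest and unpacks it over /var before restarting docker. The equivalent guest-side commands, with paths taken from the log (the copy itself happens via scp over the ssh_runner):

	# 1. host copies the cached tarball in (done via scp above):
	#    .../preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	# 2. unpack over /var, preserving file capabilities, then clean up:
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	rm /preloaded.tar.lz4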
	I0731 12:27:26.852844    8653 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:26.856447    8653 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:27:26.861200    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:26.938995    8653 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:28.160512    8653 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.221542375s)
	I0731 12:27:28.160603    8653 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:28.177628    8653 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:28.177637    8653 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:28.177672    8653 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:27:28.184119    8653 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.186055    8653 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.187164    8653 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.187192    8653 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.188625    8653 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.188738    8653 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:27:28.190169    8653 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.190330    8653 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.192054    8653 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.192158    8653 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:27:28.193609    8653 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.193643    8653 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.195105    8653 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.195176    8653 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.195715    8653 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.197084    8653 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.509963    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.520616    8653 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:27:28.520648    8653 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.520701    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.533520    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:27:28.540732    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:27:28.551707    8653 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:27:28.551730    8653 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:27:28.551776    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:27:28.562816    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:27:28.562910    8653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:27:28.564970    8653 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:27:28.564982    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0731 12:27:28.566431    8653 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:28.566542    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.577633    8653 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:27:28.577650    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:27:28.581312    8653 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:27:28.581332    8653 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.581393    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.585457    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.622230    8653 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:27:28.622265    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:27:28.622271    8653 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:27:28.622289    8653 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.622332    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.622366    8653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:28.624160    8653 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:27:28.624175    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:27:28.632170    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.641211    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:27:28.661039    8653 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:27:28.661062    8653 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.661127    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.691751    8653 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:28.691766    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:27:28.693959    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:27:28.695923    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.698287    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.743374    8653 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:27:28.743424    8653 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:27:28.743441    8653 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.743449    8653 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:27:28.743458    8653 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.743498    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.743498    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.760800    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:27:28.760800    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:27:28.760911    8653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:28.762532    8653 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:27:28.762543    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0731 12:27:28.857205    8653 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:28.857312    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.891802    8653 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:27:28.891825    8653 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.891881    8653 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:29.013832    8653 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:29.013846    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:27:29.151143    8653 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:27:29.151185    8653 cache_images.go:92] duration metric: took 973.540667ms to LoadCachedImages
	W0731 12:27:29.151229    8653 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0731 12:27:29.151235    8653 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:27:29.151290    8653 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-992000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:27:29.151347    8653 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:27:29.168864    8653 cni.go:84] Creating CNI manager for ""
	I0731 12:27:29.168874    8653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:27:29.168879    8653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:27:29.168887    8653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-992000 NodeName:running-upgrade-992000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:27:29.168948    8653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-992000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:27:29.169003    8653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:27:29.172533    8653 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:27:29.172568    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:27:29.175448    8653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:27:29.180863    8653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:27:29.186196    8653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:27:29.192139    8653 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:27:29.193617    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:29.274127    8653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:27:29.279948    8653 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000 for IP: 10.0.2.15
	I0731 12:27:29.279956    8653 certs.go:194] generating shared ca certs ...
	I0731 12:27:29.279965    8653 certs.go:226] acquiring lock for ca certs: {Name:mkb4a8d4a0878ee693c06454a581729087f6c431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.280109    8653 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key
	I0731 12:27:29.280146    8653 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key
	I0731 12:27:29.280151    8653 certs.go:256] generating profile certs ...
	I0731 12:27:29.280209    8653 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.key
	I0731 12:27:29.280230    8653 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4
	I0731 12:27:29.280240    8653 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:27:29.400509    8653 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4 ...
	I0731 12:27:29.400524    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4: {Name:mkc1c39bac0a72486d4fe6d3978dc041d6066cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.401067    8653 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4 ...
	I0731 12:27:29.401076    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4: {Name:mk954b0da873280871a3f0f1aa7cbf167e33245a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.401252    8653 certs.go:381] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt
	I0731 12:27:29.401387    8653 certs.go:385] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key
	I0731 12:27:29.401542    8653 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/proxy-client.key
	I0731 12:27:29.401671    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem (1338 bytes)
	W0731 12:27:29.401693    8653 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0731 12:27:29.401698    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:27:29.401716    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:27:29.401739    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:27:29.401757    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem (1675 bytes)
	I0731 12:27:29.401796    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:29.402107    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:27:29.410150    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:27:29.417974    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:27:29.425503    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 12:27:29.432140    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:27:29.438852    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 12:27:29.446264    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:27:29.454131    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:27:29.461231    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0731 12:27:29.468137    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:27:29.474574    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0731 12:27:29.481957    8653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:27:29.487173    8653 ssh_runner.go:195] Run: openssl version
	I0731 12:27:29.489389    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0731 12:27:29.492494    8653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0731 12:27:29.494035    8653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:15 /usr/share/ca-certificates/69862.pem
	I0731 12:27:29.494057    8653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0731 12:27:29.496109    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:27:29.498917    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:27:29.502535    8653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:29.504222    8653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:29.504241    8653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:29.505942    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:27:29.509359    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0731 12:27:29.512568    8653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0731 12:27:29.513995    8653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:15 /usr/share/ca-certificates/6986.pem
	I0731 12:27:29.514013    8653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0731 12:27:29.515847    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0731 12:27:29.518878    8653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:27:29.520609    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:27:29.522834    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:27:29.524825    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:27:29.526830    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:27:29.529184    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:27:29.530994    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
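Each check above uses openssl's -checkend flag: it exits 0 if the certificate will still be valid N seconds from now (86400 = 24 hours) and non-zero otherwise, which is how minikube decides whether control-plane certs need regeneration. For example:

	# exits 0 if the cert outlives the next 24h, 1 if it expires within that window
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "cert valid for at least another day"
	else
	  echo "cert expires within 24h; would be regenerated"
	fi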
	I0731 12:27:29.532886    8653 kubeadm.go:392] StartCluster: {Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:27:29.532955    8653 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:29.543493    8653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:27:29.547052    8653 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:27:29.547058    8653 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:27:29.547084    8653 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:27:29.550744    8653 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.551082    8653 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-992000" does not appear in /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:27:29.551191    8653 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-6494/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-992000" cluster setting kubeconfig missing "running-upgrade-992000" context setting]
	I0731 12:27:29.551400    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.551891    8653 kapi.go:59] client config for running-upgrade-992000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dc81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:27:29.552243    8653 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:27:29.555647    8653 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-992000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
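The drift check above is a plain diff between the live kubeadm config and the freshly rendered one; a non-zero exit marks drift and routes the restart through full reconfiguration, after which the new file is copied into place (the `sudo cp` a few lines down). A compressed sketch of that decision (the real flow also clears stale kubeconfig files in between):

	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "kubeadm config drift detected; reconfiguring from kubeadm.yaml.new"
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	fi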
	I0731 12:27:29.555661    8653 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:27:29.555784    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:29.567749    8653 docker.go:483] Stopping containers: [1aba6908a33d d6e907091ea2 bed6d227dd72 78c6135bb2fd c5e97c00f1d3 e237a7c0d577 8d493cbead92 4f33093eb2c6 aa920439f439 1a684af57f21 f88a399a353f 953748af082f e02cec385629 41559b97a2f2 328fb38704be da726d200e7f dea099f0915a 785fe08d3a18 3f98821dfb26 c4e0d211ab07]
	I0731 12:27:29.567807    8653 ssh_runner.go:195] Run: docker stop 1aba6908a33d d6e907091ea2 bed6d227dd72 78c6135bb2fd c5e97c00f1d3 e237a7c0d577 8d493cbead92 4f33093eb2c6 aa920439f439 1a684af57f21 f88a399a353f 953748af082f e02cec385629 41559b97a2f2 328fb38704be da726d200e7f dea099f0915a 785fe08d3a18 3f98821dfb26 c4e0d211ab07
	I0731 12:27:29.579842    8653 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:27:29.680555    8653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:27:29.684822    8653 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 31 19:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 31 19:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 19:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 31 19:26 /etc/kubernetes/scheduler.conf
	
	I0731 12:27:29.684862    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf
	I0731 12:27:29.688287    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.688320    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:27:29.691541    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf
	I0731 12:27:29.694995    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.695028    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:27:29.698379    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf
	I0731 12:27:29.701445    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.701467    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:27:29.704243    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf
	I0731 12:27:29.706863    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.706881    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
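Each of the four grep/rm pairs above applies the same rule: a kubeconfig that does not reference the expected control-plane endpoint is stale, so it is deleted and left for `kubeadm init phase kubeconfig` to regenerate. A local-filesystem sketch of that cleanup loop (the log performs it with sudo over SSH):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:51329")
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, c := range confs {
            data, err := os.ReadFile(c)
            if err != nil {
                continue // missing file: nothing to clean up
            }
            if !bytes.Contains(data, endpoint) {
                // Stale config: remove it so kubeadm regenerates it.
                fmt.Printf("%s does not reference %s - removing\n", c, endpoint)
                _ = os.Remove(c)
            }
        }
    }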
	I0731 12:27:29.709978    8653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:27:29.712857    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:29.745023    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.345967    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.595401    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.624107    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
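Rather than a full `kubeadm init`, the five commands above replay individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, with PATH prefixed by the versioned binary directory. A sketch of driving those phases from Go, mirroring the `sudo env PATH=...` invocations in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        env := append(os.Environ(),
            "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Env = env
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
    }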
	I0731 12:27:30.653311    8653 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:27:30.653385    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:31.155725    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:31.655759    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:32.153970    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:32.654082    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:32.658674    8653 api_server.go:72] duration metric: took 2.005431s to wait for apiserver process to appear ...
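The pgrep lines above poll roughly every 500ms until a kube-apiserver process appears; `pgrep -xnf` matches the pattern against the full command line of the newest matching process. A bounded version of that wait loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep every 500ms until the pattern matches a
    // running process or the timeout elapses.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
                return nil // pgrep exit 0: process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %q", pattern)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("apiserver process is up")
    }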
	I0731 12:27:32.658682    8653 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:27:32.658691    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:37.660674    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:37.660718    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:42.661402    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:42.661482    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:47.662186    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:47.662209    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:52.662864    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:52.662963    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:57.664138    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:57.664185    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:02.665343    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:02.665391    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:07.666947    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:07.666985    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:12.669107    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:12.669178    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:17.671267    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:17.671302    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:22.673375    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:22.673447    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:27.675898    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:27.675978    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:32.678563    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
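Every healthz probe in this run hits the 5-second client timeout, so the log alternates "Checking" and "stopped" lines until minikube falls back to gathering diagnostics. A sketch of the probe pattern; the 5s Timeout matches the gaps between log lines, and InsecureSkipVerify is a simplification (the real check trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request cap, as in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver is healthy")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }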
	I0731 12:28:32.678923    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:32.708360    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:32.708496    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:32.727315    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:32.727434    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:32.747223    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:32.747303    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:32.758424    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:32.758498    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:32.769175    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:32.769246    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:32.780229    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:32.780305    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:32.791948    8653 logs.go:276] 0 containers: []
	W0731 12:28:32.791959    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:32.792020    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:32.802760    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:32.802777    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:32.802782    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:32.814143    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:32.814153    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:32.828021    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:32.828036    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:32.843188    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:32.843200    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:32.854314    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:32.854324    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:32.865716    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:32.865727    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:32.878880    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:32.878891    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:32.895740    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:32.895752    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:32.910997    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:32.911007    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:32.938389    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:32.938395    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:32.942918    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:32.942927    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:32.956675    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:32.956691    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:32.969603    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:32.969614    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:32.981771    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:32.981781    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:32.993861    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:32.993873    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:33.005059    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:33.005070    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:33.046978    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:33.046986    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:33.128228    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:33.128240    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:33.142285    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:33.142296    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
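The sweep above is the diagnostic fallback: for each control-plane component, look up its containers by the k8s_<name> filter, then tail the last 400 log lines of each, alongside the kubelet and docker journals, dmesg, and `kubectl describe nodes`. The same sweep repeats after every failed healthz round below. A local sketch of the per-container part:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // named for the given component.
    func containerIDs(component string) []string {
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids := containerIDs(c)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(out))
            }
        }
    }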
	I0731 12:28:35.657478    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:40.658535    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:40.659004    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:40.689151    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:40.689285    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:40.708137    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:40.708245    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:40.724180    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:40.724289    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:40.744924    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:40.744999    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:40.755237    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:40.755307    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:40.766349    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:40.766419    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:40.776641    8653 logs.go:276] 0 containers: []
	W0731 12:28:40.776653    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:40.776721    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:40.787442    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:40.787458    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:40.787465    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:40.800112    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:40.800123    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:40.818522    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:40.818533    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:40.831234    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:40.831247    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:40.843679    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:40.843690    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:40.859672    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:40.859682    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:40.886958    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:40.886968    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:40.898483    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:40.898495    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:40.910105    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:40.910115    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:40.927913    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:40.927923    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:40.939097    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:40.939107    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:40.981367    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:40.981376    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:40.985959    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:40.985968    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:41.021990    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:41.022001    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:41.036018    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:41.036028    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:41.050143    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:41.050154    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:41.063989    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:41.064006    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:41.078577    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:41.078591    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:41.090123    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:41.090135    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:43.603736    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:48.605938    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:48.606043    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:48.617058    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:48.617134    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:48.628341    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:48.628415    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:48.639959    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:48.640034    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:48.650679    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:48.650747    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:48.661558    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:48.661629    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:48.672777    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:48.672855    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:48.683313    8653 logs.go:276] 0 containers: []
	W0731 12:28:48.683323    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:48.683377    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:48.694413    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:48.694430    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:48.694436    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:48.706033    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:48.706048    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:48.724132    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:48.724146    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:48.736658    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:48.736671    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:48.748287    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:48.748301    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:48.759562    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:48.759573    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:48.787212    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:48.787220    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:48.821749    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:48.821760    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:48.835529    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:48.835539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:48.850916    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:48.850926    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:48.862642    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:48.862655    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:48.877856    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:48.877873    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:48.893317    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:48.893330    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:48.905390    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:48.905400    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:48.917656    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:48.917672    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:48.933724    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:48.933734    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:48.973961    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:48.973969    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:48.978615    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:48.978626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:48.993026    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:48.993036    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:51.506252    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:56.508842    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:56.509165    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:56.538237    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:56.538358    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:56.555564    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:56.555655    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:56.569514    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:56.569586    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:56.581448    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:56.581523    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:56.592285    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:56.592360    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:56.603075    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:56.603146    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:56.618118    8653 logs.go:276] 0 containers: []
	W0731 12:28:56.618129    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:56.618191    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:56.628739    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:56.628757    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:56.628763    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:56.643137    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:56.643150    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:56.654606    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:56.654617    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:56.673000    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:56.673011    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:56.685103    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:56.685116    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:56.697026    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:56.697038    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:56.701686    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:56.701693    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:56.727312    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:56.727326    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:56.739714    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:56.739724    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:56.750753    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:56.750765    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:56.762335    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:56.762349    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:56.773861    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:56.773874    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:56.811225    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:56.811236    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:56.828563    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:56.828574    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:56.841119    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:56.841130    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:56.860740    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:56.860750    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:56.874932    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:56.874946    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:56.886474    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:56.886486    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:56.898052    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:56.898065    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:59.439955    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:04.442468    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:04.442733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:04.473912    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:04.474043    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:04.493045    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:04.493129    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:04.506925    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:04.507021    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:04.518964    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:04.519029    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:04.529862    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:04.529938    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:04.541045    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:04.541119    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:04.551251    8653 logs.go:276] 0 containers: []
	W0731 12:29:04.551265    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:04.551324    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:04.562293    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:04.562306    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:04.562313    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:04.574625    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:04.574636    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:04.593351    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:04.593362    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:04.605438    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:04.605448    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:04.632611    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:04.632620    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:04.675089    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:04.675101    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:04.688872    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:04.688883    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:04.703768    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:04.703778    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:04.715830    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:04.715842    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:04.732537    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:04.732552    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:04.744493    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:04.744505    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:04.755720    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:04.755730    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:04.767222    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:04.767233    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:04.778799    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:04.778812    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:04.791223    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:04.791235    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:04.804645    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:04.804659    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:04.809254    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:04.809263    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:04.846905    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:04.846920    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:04.862568    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:04.862580    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:07.375834    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:12.378337    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:12.378604    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:12.401976    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:12.402106    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:12.417655    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:12.417737    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:12.430540    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:12.430623    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:12.441518    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:12.441599    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:12.451469    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:12.451535    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:12.462043    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:12.462116    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:12.472368    8653 logs.go:276] 0 containers: []
	W0731 12:29:12.472380    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:12.472438    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:12.482984    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:12.483001    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:12.483007    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:12.496693    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:12.496704    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:12.510428    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:12.510440    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:12.521778    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:12.521790    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:12.533224    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:12.533234    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:12.551141    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:12.551154    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:12.562795    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:12.562810    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:12.604349    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:12.604357    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:12.640840    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:12.640850    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:12.645614    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:12.645621    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:12.671908    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:12.671920    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:12.683272    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:12.683283    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:12.695994    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:12.696005    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:12.708577    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:12.708587    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:12.720502    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:12.720512    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:12.732898    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:12.732908    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:12.743970    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:12.743981    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:12.759659    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:12.759670    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:12.774159    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:12.774170    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:15.287458    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:20.289593    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:20.289729    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:20.301848    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:20.301930    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:20.312122    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:20.312195    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:20.322594    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:20.322667    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:20.333340    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:20.333411    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:20.344154    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:20.344220    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:20.355114    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:20.355189    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:20.365831    8653 logs.go:276] 0 containers: []
	W0731 12:29:20.365844    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:20.365902    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:20.380951    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:20.380968    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:20.380973    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:20.394987    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:20.395001    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:20.408526    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:20.408539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:20.419979    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:20.419993    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:20.431079    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:20.431093    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:20.465970    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:20.465982    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:20.479266    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:20.479276    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:20.491739    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:20.491749    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:20.502750    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:20.502761    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:20.528275    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:20.528284    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:20.540719    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:20.540733    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:20.552567    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:20.552579    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:20.570110    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:20.570120    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:20.588345    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:20.588356    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:20.600048    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:20.600059    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:20.611040    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:20.611051    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:20.622262    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:20.622273    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:20.663585    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:20.663599    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:20.668341    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:20.668350    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:23.184888    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:28.187198    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:28.187617    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:28.239015    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:28.239152    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:28.261083    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:28.261175    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:28.275914    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:28.276002    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:28.287472    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:28.287545    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:28.298633    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:28.298714    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:28.309416    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:28.309497    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:28.319969    8653 logs.go:276] 0 containers: []
	W0731 12:29:28.319981    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:28.320041    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:28.330825    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:28.330840    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:28.330845    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:28.343779    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:28.343789    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:28.357501    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:28.357510    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:28.369608    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:28.369619    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:28.382319    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:28.382330    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:28.393870    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:28.393883    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:28.411889    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:28.411903    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:28.416447    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:28.416454    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:28.430558    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:28.430569    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:28.441959    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:28.441970    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:28.454444    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:28.454455    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:28.465572    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:28.465583    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:28.476693    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:28.476705    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:28.489825    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:28.489835    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:28.529163    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:28.529175    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:28.565945    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:28.565960    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:28.580674    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:28.580686    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:28.596659    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:28.596673    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:28.609674    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:28.609686    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:31.134851    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:36.137066    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:36.137184    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:36.164842    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:36.164921    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:36.176102    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:36.176172    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:36.186706    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:36.186777    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:36.196965    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:36.197025    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:36.207492    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:36.207564    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:36.218338    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:36.218398    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:36.233324    8653 logs.go:276] 0 containers: []
	W0731 12:29:36.233339    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:36.233398    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:36.251730    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:36.251746    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:36.251752    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:36.256182    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:36.256189    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:36.291217    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:36.291229    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:36.312843    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:36.312854    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:36.337262    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:36.337272    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:36.349590    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:36.349599    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:36.363595    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:36.363606    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:36.374481    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:36.374493    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:36.388052    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:36.388064    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:36.399336    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:36.399347    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:36.413664    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:36.413675    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:36.424965    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:36.424977    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:36.435867    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:36.435878    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:36.447834    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:36.447847    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:36.487124    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:36.487135    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:36.501006    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:36.501019    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:36.512594    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:36.512605    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:36.524769    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:36.524779    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:36.537189    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:36.537199    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
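
[Editor's note] The cycle above repeats for the rest of this section: each probe of https://10.0.2.15:8443/healthz fails after roughly five seconds with "Client.Timeout exceeded", minikube dumps per-component diagnostics, waits a couple of seconds, and probes again. A minimal Go sketch of that retry shape follows; the function name, the exact intervals, and the gather callback are illustrative assumptions, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 OK or the outer deadline passes. After every failed probe it runs
// a diagnostics callback (the docker/journalctl dumps seen above).
func waitForHealthz(url string, deadline time.Time, gather func()) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s probe-to-"stopped" gap in the log
		Transport: &http.Transport{
			// the cluster serves a self-signed certificate inside the VM
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported healthy
			}
		}
		gather()                            // dump component logs, as above
		time.Sleep(2500 * time.Millisecond) // brief backoff before the next probe
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // hypothetical outer budget
	err := waitForHealthz("https://10.0.2.15:8443/healthz", deadline, func() {
		fmt.Println("gathering component logs ...") // stand-in for the dumps above
	})
	fmt.Println(err)
}

In this run the probe never succeeds, so the gather-and-retry loop repeats until the outer deadline expires.
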
	I0731 12:29:39.053582    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:44.055694    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:44.055837    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:44.072961    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:44.073046    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:44.086157    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:44.086230    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:44.097723    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:44.097792    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:44.108750    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:44.108815    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:44.119716    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:44.119784    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:44.129941    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:44.130016    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:44.140470    8653 logs.go:276] 0 containers: []
	W0731 12:29:44.140481    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:44.140538    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:44.151934    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:44.151949    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:44.151955    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:44.163949    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:44.163961    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:44.168532    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:44.168542    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:44.183634    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:44.183644    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:44.194897    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:44.194908    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:44.212552    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:44.212562    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:44.224012    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:44.224023    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:44.239240    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:44.239252    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:44.279194    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:44.279206    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:44.292893    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:44.292902    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:44.304257    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:44.304268    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:44.319373    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:44.319384    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:44.356581    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:44.356591    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:44.371485    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:44.371498    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:44.383634    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:44.383647    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:44.407653    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:44.407664    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:44.421372    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:44.421383    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:44.433368    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:44.433384    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:44.445402    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:44.445413    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:46.958698    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:51.960804    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:51.960978    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:51.978513    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:51.978607    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:51.991421    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:51.991489    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:52.004379    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:52.004451    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:52.015344    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:52.015418    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:52.026224    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:52.026297    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:52.036774    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:52.036847    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:52.047183    8653 logs.go:276] 0 containers: []
	W0731 12:29:52.047193    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:52.047254    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:52.062537    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:52.062551    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:52.062557    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:52.075050    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:52.075060    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:52.089557    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:52.089571    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:52.101412    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:52.101423    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:52.114251    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:52.114266    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:52.126017    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:52.126030    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:52.161970    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:52.161982    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:52.176706    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:52.176722    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:52.188986    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:52.188998    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:52.215081    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:52.215091    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:52.219524    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:52.219533    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:52.233307    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:52.233317    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:52.245520    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:52.245535    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:52.263671    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:52.263683    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:52.276932    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:52.276948    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:52.319392    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:52.319400    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:52.330867    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:52.330880    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:52.342734    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:52.342744    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:52.354631    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:52.354646    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:54.868923    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:59.871005    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:59.871110    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:59.883767    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:59.883847    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:59.895576    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:59.895653    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:59.907152    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:59.907225    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:59.922450    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:59.922513    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:59.933530    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:59.933595    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:59.944718    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:59.944807    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:59.955256    8653 logs.go:276] 0 containers: []
	W0731 12:29:59.955268    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:59.955331    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:59.966288    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:59.966307    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:59.966313    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:00.006311    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:00.006326    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:00.010845    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:00.010855    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:00.025958    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:00.025971    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:00.049875    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:00.049888    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:00.065264    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:00.065275    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:00.078281    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:00.078294    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:00.121954    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:00.121966    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:00.133818    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:00.133829    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:00.145979    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:00.145991    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:00.163914    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:00.163928    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:00.175924    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:00.175937    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:00.190041    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:00.190052    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:00.215345    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:00.215356    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:00.228438    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:00.228451    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:00.240303    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:00.240313    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:00.251969    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:00.251981    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:00.263388    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:00.263399    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:00.280642    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:00.280656    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:02.794039    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:07.796146    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:07.796404    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:07.827764    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:07.827891    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:07.852953    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:07.853027    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:07.865227    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:07.865294    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:07.875955    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:07.876032    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:07.886980    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:07.887052    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:07.898871    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:07.898935    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:07.909194    8653 logs.go:276] 0 containers: []
	W0731 12:30:07.909204    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:07.909256    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:07.919976    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:07.919989    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:07.919994    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:07.956036    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:07.956050    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:07.971340    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:07.971350    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:07.983558    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:07.983569    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:07.994742    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:07.994752    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:07.999010    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:07.999016    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:08.012785    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:08.012795    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:08.026036    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:08.026052    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:08.067790    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:08.067805    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:08.081990    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:08.082000    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:08.095702    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:08.095714    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:08.113174    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:08.113186    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:08.124973    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:08.124984    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:08.150354    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:08.150372    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:08.164103    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:08.164113    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:08.175588    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:08.175603    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:08.187223    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:08.187238    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:08.198397    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:08.198411    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:08.211058    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:08.211072    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:10.726727    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:15.728906    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:15.729086    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:15.739943    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:15.740017    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:15.751287    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:15.751348    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:15.761830    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:15.761906    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:15.772743    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:15.772811    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:15.782619    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:15.782681    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:15.793040    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:15.793110    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:15.803399    8653 logs.go:276] 0 containers: []
	W0731 12:30:15.803411    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:15.803469    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:15.813654    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:15.813670    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:15.813675    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:15.827055    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:15.827067    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:15.838982    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:15.838995    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:15.854570    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:15.854586    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:15.871435    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:15.871445    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:15.882661    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:15.882672    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:15.905348    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:15.905356    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:15.944455    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:15.944468    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:15.958635    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:15.958646    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:15.971247    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:15.971259    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:15.983239    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:15.983252    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:15.994394    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:15.994405    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:16.029441    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:16.029452    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:16.041168    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:16.041179    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:16.053091    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:16.053102    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:16.070228    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:16.070239    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:16.082150    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:16.082163    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:16.094361    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:16.094372    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:16.098760    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:16.098766    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
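
[Editor's note] Each cycle begins by resolving container IDs per control-plane component with one docker ps -a per component, filtered on the k8s_ name prefix that kubeadm-managed containers carry; the "2 containers" pairs are the current and previous instance of each component. A rough Go illustration of that discovery step, under the assumption that the helper name and the fixed component list are hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// starts with the kubeadm prefix for the given component.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out)) // one short ID per line
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids := containerIDs(c)
		// mirrors the logs.go:276 "N containers: [...]" lines above
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

An empty result, as with "kindnet" here, simply means no container of that name exists in this cluster configuration; the warning is expected on a non-CNI-kindnet setup.
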
	I0731 12:30:18.622437    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:23.624684    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:23.624921    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:23.650986    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:23.651111    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:23.668476    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:23.668565    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:23.682113    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:23.682185    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:23.694262    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:23.694337    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:23.704507    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:23.704577    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:23.715136    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:23.715216    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:23.726502    8653 logs.go:276] 0 containers: []
	W0731 12:30:23.726515    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:23.726573    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:23.736874    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:23.736892    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:23.736897    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:23.779030    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:23.779038    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:23.790656    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:23.790666    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:23.802101    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:23.802116    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:23.848229    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:23.848240    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:23.862287    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:23.862298    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:23.875805    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:23.875816    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:23.886942    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:23.886953    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:23.909466    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:23.909473    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:23.921430    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:23.921441    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:23.925707    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:23.925714    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:23.945830    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:23.945843    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:23.960578    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:23.960588    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:23.971943    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:23.971956    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:23.990254    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:23.990265    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:24.001879    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:24.001890    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:24.014335    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:24.014346    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:24.025830    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:24.025842    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:24.038623    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:24.038633    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:26.551632    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:31.553959    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:31.554349    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:31.587711    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:31.587847    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:31.607207    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:31.607309    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:31.621245    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:31.621329    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:31.633507    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:31.633592    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:31.644433    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:31.644507    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:31.656479    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:31.656554    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:31.667080    8653 logs.go:276] 0 containers: []
	W0731 12:30:31.667090    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:31.667152    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:31.678030    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:31.678046    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:31.678051    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:31.682947    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:31.682954    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:31.719959    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:31.719972    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:31.734756    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:31.734765    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:31.746616    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:31.746629    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:31.759030    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:31.759041    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:31.775659    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:31.775671    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:31.817568    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:31.817576    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:31.833841    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:31.833852    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:31.845933    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:31.845944    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:31.858728    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:31.858737    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:31.871469    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:31.871479    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:31.887708    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:31.887718    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:31.899252    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:31.899263    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:31.911258    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:31.911268    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:31.928410    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:31.928420    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:31.939678    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:31.939689    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:31.964058    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:31.964066    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:31.978855    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:31.978864    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:34.491593    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:39.493456    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:39.493666    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:39.517988    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:39.518117    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:39.534777    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:39.534865    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:39.547829    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:39.547907    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:39.567337    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:39.567409    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:39.578701    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:39.578777    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:39.589552    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:39.589625    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:39.600027    8653 logs.go:276] 0 containers: []
	W0731 12:30:39.600039    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:39.600107    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:39.610995    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:39.611012    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:39.611019    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:39.615366    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:39.615372    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:39.627064    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:39.627077    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:39.637939    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:39.637950    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:39.650020    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:39.650029    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:39.664103    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:39.664114    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:39.704242    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:39.704253    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:39.722682    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:39.722693    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:39.734004    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:39.734016    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:39.745665    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:39.745677    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:39.757576    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:39.757587    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:39.775731    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:39.775741    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:39.800138    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:39.800146    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:39.813541    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:39.813554    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:39.851945    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:39.851961    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:39.867576    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:39.867587    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:39.879317    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:39.879329    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:39.890670    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:39.890682    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:39.902148    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:39.902159    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:42.418125    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:47.420185    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:47.420418    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:47.440376    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:47.440475    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:47.453980    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:47.454053    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:47.465547    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:47.465625    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:47.476215    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:47.476281    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:47.487092    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:47.487163    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:47.497220    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:47.497285    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:47.507374    8653 logs.go:276] 0 containers: []
	W0731 12:30:47.507384    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:47.507436    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:47.517343    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:47.517363    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:47.517374    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:47.552850    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:47.552863    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:47.564217    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:47.564228    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:47.577138    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:47.577150    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:47.588948    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:47.588958    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:47.601091    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:47.601103    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:47.642024    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:47.642035    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:47.646351    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:47.646357    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:47.660123    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:47.660138    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:47.674659    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:47.674669    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:47.691664    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:47.691675    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:47.717116    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:47.717126    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:47.729449    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:47.729460    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:47.743941    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:47.743952    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:47.756326    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:47.756337    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:47.768049    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:47.768064    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:47.779923    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:47.779933    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:47.791430    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:47.791441    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:47.803193    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:47.803204    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
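
[Editor's note] The "Gathering logs for ..." steps are uniform: docker logs --tail 400 for every discovered container, plus journalctl tails for the kubelet and Docker/cri-docker units and a dmesg excerpt. A hedged sketch of that fan-out, reusing two container-ID pairs from the log above; the run helper and the hard-coded ID map are illustrative assumptions, not minikube's logs.go:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command line and returns whatever it printed,
// ignoring errors for brevity (a real collector would report them).
func run(cmd string) string {
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out)
}

func main() {
	ids := map[string][]string{
		"kube-apiserver": {"5c115b5d3034", "c5e97c00f1d3"},
		"etcd":           {"69812d383bef", "da726d200e7f"},
	}
	for component, containers := range ids {
		for _, id := range containers {
			fmt.Printf("== %s [%s] ==\n%s", component, id,
				run("docker logs --tail 400 "+id)) // same tail depth as the log
		}
	}
	// journald-backed units are tailed the same way:
	fmt.Print(run("sudo journalctl -u kubelet -n 400"))
	fmt.Print(run("sudo journalctl -u docker -u cri-docker -n 400"))
}
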
	I0731 12:30:50.322345    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:55.324796    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:55.324998    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:55.341596    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:55.341686    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:55.354442    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:55.354518    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:55.365933    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:55.366011    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:55.377079    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:55.377150    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:55.387534    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:55.387606    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:55.397553    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:55.397621    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:55.407736    8653 logs.go:276] 0 containers: []
	W0731 12:30:55.407748    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:55.407804    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:55.418694    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
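Component containers are located by the k8s_<name> prefix that the dockershim/cri-dockerd naming convention applies, using exactly the `docker ps -a --filter=name=... --format={{.ID}}` command shown above. A sketch of that discovery step; the two IDs per component reflect the pre- and post-restart containers:

```go
// findcontainers.go: a sketch of the container-discovery step above.
// minikube's logs.go wraps this with more parsing and validation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>, e.g. k8s_kube-apiserver.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// A restart leaves two containers per component, as in this log.
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}
```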
	I0731 12:30:55.418710    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:55.418716    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:55.439172    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:55.439186    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:55.450434    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:55.450445    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:55.461988    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:55.462000    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:55.500809    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:55.500818    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:55.505560    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:55.505569    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:55.518077    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:55.518087    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:55.529322    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:55.529334    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:55.541821    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:55.541832    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:55.561740    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:55.561751    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:55.584146    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:55.584155    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:55.625333    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:55.625345    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:55.648901    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:55.648913    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
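The "container status" command prefers crictl and falls back to the plain Docker CLI when crictl is absent or fails. The same fallback expressed in Go, simplified:

```go
// containerstatus.go: a sketch of the container-status fallback above
// (`sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`).
// Error handling is deliberately minimal.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
	}
	// crictl missing or failed: fall back to the Docker CLI.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out, err)
}
```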
	I0731 12:30:55.660846    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:55.660860    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:55.675520    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:55.675534    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:55.690886    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:55.690899    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:55.702073    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:55.702084    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:55.713886    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:55.713900    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:55.727612    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:55.727622    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:58.244898    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:03.247184    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:03.247658    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:03.290440    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:03.290583    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:03.311829    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:03.311937    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:03.327852    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:03.327936    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:03.340107    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:03.340178    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:03.351250    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:03.351324    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:03.362169    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:03.362246    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:03.373022    8653 logs.go:276] 0 containers: []
	W0731 12:31:03.373036    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:03.373092    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:03.384733    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:03.384749    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:03.384754    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:03.404184    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:03.404194    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:03.415789    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:03.415802    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:03.431579    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:03.431590    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:03.443203    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:03.443213    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:03.455598    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:03.455609    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:03.467014    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:03.467024    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:03.479828    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:03.479838    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:03.504477    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:03.504488    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:03.547608    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:03.547625    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:03.583786    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:03.583798    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:03.596365    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:03.596376    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:03.610849    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:03.610861    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:03.622092    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:03.622104    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:03.636647    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:03.636657    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:03.648361    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:03.648373    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:03.665203    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:03.665213    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:03.676919    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:03.676930    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:03.681666    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:03.681673    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:06.196876    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:11.199495    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:11.199776    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:11.229156    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:11.229292    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:11.248258    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:11.248366    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:11.263121    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:11.263201    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:11.275186    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:11.275259    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:11.285906    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:11.285976    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:11.297332    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:11.297410    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:11.308774    8653 logs.go:276] 0 containers: []
	W0731 12:31:11.308784    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:11.308850    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:11.318971    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:11.318986    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:11.318992    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:11.330311    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:11.330324    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:11.342535    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:11.342546    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:11.359922    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:11.359936    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:11.374561    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:11.374576    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:11.414265    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:11.414275    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:11.449788    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:11.449802    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:11.463797    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:11.463807    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:11.475180    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:11.475190    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:11.485942    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:11.485953    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:11.508091    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:11.508100    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:11.512988    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:11.512997    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:11.525717    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:11.525729    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:11.540882    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:11.540892    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:11.553008    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:11.553021    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:11.565027    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:11.565037    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:11.579764    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:11.579780    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:11.594882    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:11.594896    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:11.607010    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:11.607022    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:14.124010    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:19.126125    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:19.126468    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:19.158180    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:19.158309    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:19.176137    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:19.176233    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:19.190028    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:19.190102    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:19.201426    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:19.201496    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:19.213150    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:19.213220    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:19.224292    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:19.224355    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:19.234709    8653 logs.go:276] 0 containers: []
	W0731 12:31:19.234721    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:19.234784    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:19.250562    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:19.250579    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:19.250587    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:19.255023    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:19.255033    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:19.272165    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:19.272177    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:19.283615    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:19.283626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:19.297329    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:19.297340    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:19.311806    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:19.311819    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:19.323100    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:19.323110    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:19.334877    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:19.334888    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:19.346945    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:19.346955    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:19.370530    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:19.370539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:19.388424    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:19.388434    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:19.399958    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:19.399969    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:19.411245    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:19.411259    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:19.422767    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:19.422780    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:19.434238    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:19.434249    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:19.476393    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:19.476400    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:19.510917    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:19.510927    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:19.523328    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:19.523339    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:19.535573    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:19.535584    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:22.050692    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:27.051847    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:27.052104    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:27.074944    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:27.075059    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:27.091568    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:27.091662    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:27.105011    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:27.105094    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:27.116394    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:27.116470    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:27.126738    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:27.126809    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:27.137166    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:27.137235    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:27.147406    8653 logs.go:276] 0 containers: []
	W0731 12:31:27.147419    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:27.147476    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:27.158220    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:27.158238    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:27.158244    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:27.163003    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:27.163010    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:27.175405    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:27.175416    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:27.189174    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:27.189187    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:27.200839    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:27.200852    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:27.219087    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:27.219101    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:27.230853    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:27.230864    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:27.242797    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:27.242808    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:27.255090    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:27.255100    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:27.266385    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:27.266397    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:27.277646    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:27.277657    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:27.313759    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:27.313770    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:27.328867    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:27.328877    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:27.347054    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:27.347064    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:27.359111    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:27.359122    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:27.400817    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:27.400827    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:27.413503    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:27.413517    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:27.430900    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:27.430909    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:27.441897    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:27.441909    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:29.966871    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:34.968959    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:34.969042    8653 kubeadm.go:597] duration metric: took 4m5.430321167s to restartPrimaryControlPlane
	W0731 12:31:34.969102    8653 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:31:34.969126    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:31:36.077179    8653 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.108074625s)
	I0731 12:31:36.077238    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:31:36.082600    8653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:31:36.085803    8653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:31:36.088707    8653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:31:36.088716    8653 kubeadm.go:157] found existing configuration files:
	
	I0731 12:31:36.088762    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf
	I0731 12:31:36.092454    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:31:36.092509    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:31:36.095897    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf
	I0731 12:31:36.098912    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:31:36.098959    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:31:36.102631    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf
	I0731 12:31:36.105010    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:31:36.105032    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:31:36.107827    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf
	I0731 12:31:36.110847    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:31:36.110882    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
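The grep/rm sweep above keeps a kubeconfig only if it already references the expected control-plane endpoint; in this run none of the four files exist after the reset, so every grep exits with status 2 and each rm is a no-op. The same logic as plain file operations (a sketch, not minikube's code):

```go
// staleconf.go: a sketch of the stale-config sweep above. Each file
// under /etc/kubernetes is kept only if it already points at the
// expected endpoint; otherwise it is removed so `kubeadm init` can
// regenerate it. Paths and endpoint are taken from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if os.IsNotExist(err) {
			continue // nothing to clean (the case in this log)
		}
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			// Wrong or stale endpoint: remove so kubeadm rewrites it.
			if err := os.Remove(f); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(cleanStaleConfigs("https://control-plane.minikube.internal:51329"))
}
```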
	I0731 12:31:36.113603    8653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:31:36.131651    8653 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:31:36.131694    8653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:31:36.180328    8653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:31:36.180389    8653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:31:36.180471    8653 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:31:36.232837    8653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:31:36.238449    8653 out.go:204]   - Generating certificates and keys ...
	I0731 12:31:36.238488    8653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:31:36.238531    8653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:31:36.238575    8653 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:31:36.238613    8653 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:31:36.238737    8653 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:31:36.238839    8653 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:31:36.238881    8653 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:31:36.238915    8653 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:31:36.238968    8653 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:31:36.239053    8653 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:31:36.239075    8653 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:31:36.239105    8653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:31:36.431300    8653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:31:36.465706    8653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:31:36.524288    8653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:31:36.760169    8653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:31:36.789345    8653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:31:36.789620    8653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:31:36.789697    8653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:31:36.885020    8653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:31:36.888791    8653 out.go:204]   - Booting up control plane ...
	I0731 12:31:36.888867    8653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:31:36.888909    8653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:31:36.888973    8653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:31:36.889026    8653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:31:36.889107    8653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:31:41.390698    8653 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503783 seconds
	I0731 12:31:41.390778    8653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:31:41.394894    8653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:31:41.914752    8653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:31:41.915134    8653 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-992000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:31:42.418233    8653 kubeadm.go:310] [bootstrap-token] Using token: kogid8.knahutph1x867ol2
	I0731 12:31:42.421756    8653 out.go:204]   - Configuring RBAC rules ...
	I0731 12:31:42.421808    8653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:31:42.421848    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:31:42.423891    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:31:42.428369    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:31:42.429205    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:31:42.429945    8653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:31:42.433058    8653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:31:42.632834    8653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:31:42.824894    8653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:31:42.825322    8653 kubeadm.go:310] 
	I0731 12:31:42.825357    8653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:31:42.825363    8653 kubeadm.go:310] 
	I0731 12:31:42.825414    8653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:31:42.825420    8653 kubeadm.go:310] 
	I0731 12:31:42.825436    8653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:31:42.825468    8653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:31:42.825502    8653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:31:42.825507    8653 kubeadm.go:310] 
	I0731 12:31:42.825539    8653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:31:42.825547    8653 kubeadm.go:310] 
	I0731 12:31:42.825573    8653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:31:42.825578    8653 kubeadm.go:310] 
	I0731 12:31:42.825605    8653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:31:42.825652    8653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:31:42.825699    8653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:31:42.825702    8653 kubeadm.go:310] 
	I0731 12:31:42.825757    8653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:31:42.825802    8653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:31:42.825806    8653 kubeadm.go:310] 
	I0731 12:31:42.825853    8653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kogid8.knahutph1x867ol2 \
	I0731 12:31:42.825916    8653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 \
	I0731 12:31:42.825928    8653 kubeadm.go:310] 	--control-plane 
	I0731 12:31:42.825931    8653 kubeadm.go:310] 
	I0731 12:31:42.825973    8653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:31:42.825976    8653 kubeadm.go:310] 
	I0731 12:31:42.826018    8653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kogid8.knahutph1x867ol2 \
	I0731 12:31:42.826070    8653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 
	I0731 12:31:42.826127    8653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
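The --discovery-token-ca-cert-hash in the join commands above is kubeadm's public-key pin: the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch that reproduces such a value from the certificate directory used in this run:

```go
// cacerthash.go: derive a kubeadm discovery-token-ca-cert-hash. The
// cert path matches the certificateDir logged above; the hash format
// ("sha256:" + hex digest of the CA's SPKI) is kubeadm's pinning scheme.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// SHA-256 over the DER-encoded Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(h, err)
}
```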
	I0731 12:31:42.826135    8653 cni.go:84] Creating CNI manager for ""
	I0731 12:31:42.826143    8653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:31:42.829893    8653 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:31:42.836905    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:31:42.839796    8653 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
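The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is a bridge CNI chain, matching the "Configuring bridge CNI" step that follows. A representative conflist of that shape; the exact fields and subnet in minikube's embedded template are assumptions here:

```go
// writecni.go: a sketch of the bridge CNI step above. The JSON is a
// representative bridge+portmap chain, not a byte-for-byte copy of
// minikube's template (assumption).
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```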
	I0731 12:31:42.844502    8653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:31:42.844543    8653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:31:42.844563    8653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-992000 minikube.k8s.io/updated_at=2024_07_31T12_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=running-upgrade-992000 minikube.k8s.io/primary=true
	I0731 12:31:42.897899    8653 ops.go:34] apiserver oom_adj: -16
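The oom_adj check reads the apiserver's legacy OOM adjustment; -16 tells the kernel to strongly prefer not to OOM-kill the apiserver. A sketch of the same read (assumes a single kube-apiserver process, as `pgrep` in the log does):

```go
// oomadj.go: a sketch of the oom_adj check above: find the
// kube-apiserver pid and read /proc/<pid>/oom_adj.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 in this run
}
```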
	I0731 12:31:42.897946    8653 kubeadm.go:1113] duration metric: took 53.439417ms to wait for elevateKubeSystemPrivileges
	I0731 12:31:42.897974    8653 kubeadm.go:394] duration metric: took 4m13.373701792s to StartCluster
	I0731 12:31:42.897985    8653 settings.go:142] acquiring lock: {Name:mkc78613f561547480fb9245c5115e473f0ba2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:42.898068    8653 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:31:42.898478    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:42.898678    8653 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:31:42.898689    8653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:31:42.898726    8653 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-992000"
	I0731 12:31:42.898756    8653 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-992000"
	W0731 12:31:42.898761    8653 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:31:42.898761    8653 config.go:182] Loaded profile config "running-upgrade-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:31:42.898772    8653 host.go:66] Checking if "running-upgrade-992000" exists ...
	I0731 12:31:42.898795    8653 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-992000"
	I0731 12:31:42.898807    8653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-992000"
	I0731 12:31:42.899682    8653 kapi.go:59] client config for running-upgrade-992000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dc81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:31:42.899803    8653 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-992000"
	W0731 12:31:42.899807    8653 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:31:42.899819    8653 host.go:66] Checking if "running-upgrade-992000" exists ...
	I0731 12:31:42.902824    8653 out.go:177] * Verifying Kubernetes components...
	I0731 12:31:42.903169    8653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:42.906974    8653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:31:42.906981    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:31:42.910742    8653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:31:42.914781    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:31:42.918786    8653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:42.918793    8653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:31:42.918799    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:31:43.008732    8653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:31:43.014444    8653 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:31:43.014491    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:31:43.018323    8653 api_server.go:72] duration metric: took 119.63725ms to wait for apiserver process to appear ...
	I0731 12:31:43.018331    8653 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:31:43.018337    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:43.047462    8653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:43.069759    8653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:48.018328    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:48.018351    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:53.020054    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:53.020074    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:58.020071    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:58.020091    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:03.020174    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:03.020204    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:08.020348    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:08.020375    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:13.020630    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:13.020679    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:32:13.373151    8653 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:32:13.377208    8653 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:32:13.389247    8653 addons.go:510] duration metric: took 30.491599167s for enable addons: enabled=[storage-provisioner]
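The default-storageclass failure above is a StorageClasses list call timing out against the unreachable apiserver at 10.0.2.15:8443. A client-go sketch of the same call, with the kubeconfig path taken from this run:

```go
// liststorageclasses.go: a sketch of the call that fails above with
// "Error listing StorageClasses": a StorageV1 list, i.e. the
// GET /apis/storage.k8s.io/v1/storageclasses from the error message.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	scs, err := clientset.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("listing StorageClasses:", err) // i/o timeout in this run
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
```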
	I0731 12:32:18.021194    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:18.021283    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:23.021973    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:23.022006    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:28.022834    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:28.022874    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:33.024054    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:33.024097    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:38.025525    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:38.025583    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:43.027403    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:43.027511    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:43.041034    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:32:43.041123    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:43.053340    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:32:43.053414    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:43.063985    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:32:43.064061    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:43.074190    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:32:43.074261    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:43.084900    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:32:43.084967    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:43.095566    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:32:43.095634    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:43.105813    8653 logs.go:276] 0 containers: []
	W0731 12:32:43.105824    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:43.105880    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:43.116110    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:32:43.116125    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:43.116133    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:43.120914    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:43.120921    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:43.157433    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:32:43.157444    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:32:43.172203    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:32:43.172216    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:32:43.188614    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:32:43.188625    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:32:43.200385    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:32:43.200396    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:32:43.212526    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:43.212537    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:43.237556    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:32:43.237567    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:43.249213    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:43.249223    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:43.283968    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:32:43.283981    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:32:43.295112    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:32:43.295122    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:32:43.309998    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:32:43.310008    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:32:43.330118    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:32:43.330131    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:32:45.843873    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:50.845974    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:50.846139    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:50.861301    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:32:50.861391    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:50.873775    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:32:50.873853    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:50.884658    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:32:50.884732    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:50.894696    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:32:50.894764    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:50.905165    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:32:50.905242    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:50.915923    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:32:50.915995    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:50.926302    8653 logs.go:276] 0 containers: []
	W0731 12:32:50.926312    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:50.926377    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:50.936552    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:32:50.936567    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:32:50.936574    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:32:50.950329    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:32:50.950343    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:32:50.961636    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:32:50.961649    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:32:50.977770    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:32:50.977784    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:32:50.990850    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:50.990861    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:51.016931    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:51.016939    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:51.052395    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:51.052402    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:51.057103    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:51.057113    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:51.093852    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:32:51.093864    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:32:51.108800    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:32:51.108812    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:32:51.120429    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:32:51.120440    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:32:51.132542    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:32:51.132554    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:32:51.150004    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:32:51.150016    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:53.664620    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:58.665882    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:58.665998    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:58.678706    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:32:58.678789    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:58.689652    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:32:58.689733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:58.701991    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:32:58.702063    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:58.715971    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:32:58.716042    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:58.726573    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:32:58.726648    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:58.736910    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:32:58.736977    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:58.746734    8653 logs.go:276] 0 containers: []
	W0731 12:32:58.746745    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:58.746805    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:58.757518    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:32:58.757536    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:58.757543    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:58.762350    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:32:58.762360    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:32:58.776748    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:32:58.776762    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:32:58.788408    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:58.788419    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:58.812578    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:32:58.812585    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:58.824256    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:58.824267    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:58.859690    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:32:58.859699    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:32:58.874539    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:32:58.874552    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:32:58.892120    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:32:58.892133    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:32:58.903734    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:32:58.903744    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:32:58.915202    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:32:58.915214    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:32:58.926689    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:32:58.926699    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:32:58.943898    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:58.943910    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:01.481152    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:06.481451    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:06.481639    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:06.495277    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:06.495354    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:06.506492    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:06.506567    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:06.517281    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:06.517357    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:06.528321    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:06.528395    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:06.547484    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:06.547554    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:06.557871    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:06.557937    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:06.568111    8653 logs.go:276] 0 containers: []
	W0731 12:33:06.568124    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:06.568187    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:06.578765    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:06.578779    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:06.578788    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:06.592690    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:06.592703    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:06.610311    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:06.610321    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:06.635294    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:06.635305    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:06.670258    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:06.670269    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:06.685190    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:06.685203    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:06.698920    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:06.698930    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:06.714362    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:06.714373    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:06.726027    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:06.726040    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:06.738341    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:06.738356    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:06.775347    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:06.775358    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:06.780081    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:06.780091    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:06.792827    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:06.792839    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:09.306485    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:14.306971    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:14.307208    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:14.324256    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:14.324341    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:14.337467    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:14.337549    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:14.348289    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:14.348362    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:14.364829    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:14.364902    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:14.375514    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:14.375587    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:14.385930    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:14.385997    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:14.396260    8653 logs.go:276] 0 containers: []
	W0731 12:33:14.396272    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:14.396336    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:14.406752    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:14.406773    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:14.406778    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:14.423977    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:14.423989    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:14.435508    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:14.435520    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:14.461644    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:14.461660    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:14.499232    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:14.499245    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:14.504202    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:14.504210    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:14.518048    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:14.518060    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:14.529809    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:14.529820    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:14.544912    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:14.544925    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:14.556836    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:14.556848    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:14.599154    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:14.599164    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:14.617335    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:14.617347    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:14.629016    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:14.629027    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:17.147294    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:22.149419    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:22.149616    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:22.161848    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:22.161928    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:22.172654    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:22.172733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:22.183422    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:22.183497    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:22.194544    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:22.194615    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:22.208840    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:22.208912    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:22.220466    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:22.220547    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:22.232097    8653 logs.go:276] 0 containers: []
	W0731 12:33:22.232108    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:22.232184    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:22.243314    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:22.243331    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:22.243337    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:22.269565    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:22.269580    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:22.307480    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:22.307495    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:22.321042    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:22.321051    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:22.333906    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:22.333914    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:22.353381    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:22.353395    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:22.370522    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:22.370535    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:22.387136    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:22.387149    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:22.399636    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:22.399648    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:22.411626    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:22.411637    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:22.416639    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:22.416651    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:22.453994    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:22.454005    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:22.469297    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:22.469308    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:24.988747    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:29.990772    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:29.990913    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:30.007812    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:30.007908    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:30.021492    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:30.021564    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:30.037832    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:30.037917    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:30.049228    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:30.049309    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:30.060274    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:30.060340    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:30.071121    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:30.071189    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:30.081981    8653 logs.go:276] 0 containers: []
	W0731 12:33:30.081995    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:30.082067    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:30.093109    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:30.093123    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:30.093129    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:30.097775    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:30.097784    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:30.112708    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:30.112721    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:30.133394    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:30.133408    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:30.148587    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:30.148597    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:30.160705    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:30.160717    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:30.172671    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:30.172683    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:30.208288    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:30.208299    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:30.227056    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:30.227067    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:30.239300    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:30.239313    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:30.254094    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:30.254105    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:30.272483    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:30.272495    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:30.297197    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:30.297208    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:32.837010    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:37.837251    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:37.837427    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:37.849652    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:37.849733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:37.860879    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:37.860956    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:37.874136    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:37.874208    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:37.886696    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:37.886770    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:37.897329    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:37.897413    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:37.907982    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:37.908052    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:37.918012    8653 logs.go:276] 0 containers: []
	W0731 12:33:37.918029    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:37.918097    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:37.929255    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:37.929269    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:37.929275    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:37.934287    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:37.934295    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:37.973388    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:37.973399    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:37.988446    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:37.988460    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:38.003260    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:38.003274    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:38.015469    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:38.015481    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:38.039101    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:38.039114    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:38.050793    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:38.050809    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:38.084572    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:38.084589    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:38.096515    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:38.096526    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:38.108065    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:38.108077    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:38.125454    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:38.125464    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:38.137593    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:38.137603    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:40.656323    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:45.658465    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:45.658630    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:45.675330    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:45.675436    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:45.693626    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:45.693702    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:45.708579    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:45.708654    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:45.719764    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:45.719839    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:45.729947    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:45.730025    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:45.740836    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:45.740909    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:45.751407    8653 logs.go:276] 0 containers: []
	W0731 12:33:45.751420    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:45.751479    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:45.761899    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:45.761915    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:45.761921    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:45.779945    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:45.779959    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:45.791967    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:45.791978    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:45.796269    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:45.796276    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:45.830208    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:45.830218    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:45.845199    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:45.845212    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:45.857332    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:45.857341    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:45.872322    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:45.872337    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:45.884382    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:45.884392    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:45.920223    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:45.920238    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:45.933787    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:45.933798    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:45.945427    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:45.945440    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:45.956856    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:45.956866    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:48.482072    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:53.484183    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:53.484352    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:53.501122    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:53.501207    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:53.514043    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:53.514123    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:53.526116    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:53.526195    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:53.538075    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:53.538145    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:53.548695    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:53.548774    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:53.559174    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:53.559248    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:53.569510    8653 logs.go:276] 0 containers: []
	W0731 12:33:53.569523    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:53.569589    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:53.580228    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:53.580243    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:53.580249    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:53.615234    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:53.615242    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:53.629687    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:53.629697    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:53.644277    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:53.644292    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:53.655403    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:53.655415    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:53.678404    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:53.678416    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:53.683089    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:53.683095    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:53.719168    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:53.719180    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:53.734245    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:53.734254    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:53.746533    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:53.746543    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:53.758312    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:53.758322    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:53.775555    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:53.775566    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:53.786652    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:53.786663    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:56.300663    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:01.302779    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:01.302893    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:01.315465    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:01.315541    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:01.326335    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:01.326405    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:01.337331    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:01.337405    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:01.348054    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:01.348124    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:01.358581    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:01.358646    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:01.369388    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:01.369453    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:01.379547    8653 logs.go:276] 0 containers: []
	W0731 12:34:01.379558    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:01.379620    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:01.389999    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:01.390018    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:01.390023    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:01.394915    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:01.394923    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:01.409131    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:01.409140    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:01.420723    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:01.420734    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:01.456163    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:01.456172    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:01.467682    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:01.467694    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:01.485436    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:01.485447    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:01.522823    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:01.522840    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:01.537629    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:01.537642    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:01.549701    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:01.549712    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:01.568637    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:01.568649    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:01.580907    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:01.580918    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:01.595302    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:01.595315    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:01.614455    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:01.614468    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:01.625727    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:01.625741    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:04.153366    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:09.154554    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:09.154700    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:09.166860    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:09.166942    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:09.177759    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:09.177832    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:09.188698    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:09.188779    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:09.200272    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:09.200339    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:09.211060    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:09.211140    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:09.222102    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:09.222174    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:09.232887    8653 logs.go:276] 0 containers: []
	W0731 12:34:09.232898    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:09.232963    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:09.243495    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:09.243513    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:09.243519    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:09.260598    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:09.260611    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:09.272150    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:09.272163    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:09.296801    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:09.296813    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:09.331686    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:09.331695    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:09.369645    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:09.369656    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:09.381417    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:09.381427    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:09.386233    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:09.386239    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:09.401372    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:09.401383    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:09.413263    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:09.413273    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:09.425616    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:09.425626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:09.437342    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:09.437356    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:09.448854    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:09.448868    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:09.460648    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:09.460659    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:09.485337    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:09.485348    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:12.004079    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:17.006308    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:17.006492    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:17.021045    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:17.021127    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:17.032315    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:17.032389    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:17.047924    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:17.048004    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:17.058720    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:17.058793    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:17.069371    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:17.069460    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:17.080347    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:17.080419    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:17.091619    8653 logs.go:276] 0 containers: []
	W0731 12:34:17.091630    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:17.091694    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:17.107022    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:17.107040    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:17.107045    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:17.118849    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:17.118860    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:17.130886    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:17.130897    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:17.164533    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:17.164544    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:17.179020    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:17.179030    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:17.190144    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:17.190155    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:17.207512    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:17.207522    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:17.242676    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:17.242684    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:17.257651    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:17.257665    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:17.269385    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:17.269398    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:17.280726    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:17.280735    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:17.294746    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:17.294759    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:17.306230    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:17.306244    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:17.331575    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:17.331584    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:17.336085    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:17.336093    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:19.849691    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:24.851736    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:24.851837    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:24.862579    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:24.862658    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:24.873861    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:24.873941    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:24.884494    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:24.884570    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:24.894670    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:24.894744    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:24.905377    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:24.905451    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:24.916064    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:24.916132    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:24.926365    8653 logs.go:276] 0 containers: []
	W0731 12:34:24.926380    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:24.926442    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:24.936807    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:24.936827    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:24.936832    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:24.972528    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:24.972539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:24.986737    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:24.986751    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:24.998284    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:24.998294    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:25.012260    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:25.012270    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:25.023756    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:25.023771    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:25.048386    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:25.048394    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:25.059911    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:25.059920    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:25.072447    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:25.072464    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:25.084380    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:25.084394    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:25.099406    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:25.099419    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:25.117860    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:25.117871    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:25.130143    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:25.130157    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:25.134482    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:25.134492    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:25.174579    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:25.174590    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:27.686896    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:32.689023    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:32.689147    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:32.700423    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:32.700495    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:32.713266    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:32.713337    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:32.724418    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:32.724494    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:32.735278    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:32.735344    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:32.746073    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:32.746147    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:32.756690    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:32.756754    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:32.766728    8653 logs.go:276] 0 containers: []
	W0731 12:34:32.766739    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:32.766796    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:32.777342    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:32.777360    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:32.777366    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:32.813222    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:32.813239    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:32.824939    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:32.824953    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:32.858934    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:32.858942    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:32.870441    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:32.870453    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:32.886679    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:32.886690    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:32.911292    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:32.911301    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:32.922406    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:32.922417    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:32.934450    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:32.934461    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:32.946334    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:32.946345    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:32.960749    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:32.960759    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:32.979027    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:32.979037    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:32.990343    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:32.990354    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:32.995006    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:32.995013    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:33.010001    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:33.010012    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:35.530466    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:40.532580    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:40.532741    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:40.548900    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:40.548978    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:40.561909    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:40.561983    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:40.573017    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:40.573097    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:40.583205    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:40.583271    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:40.593988    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:40.594054    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:40.604495    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:40.604563    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:40.614527    8653 logs.go:276] 0 containers: []
	W0731 12:34:40.614537    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:40.614588    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:40.625281    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:40.625301    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:40.625305    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:40.641181    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:40.641192    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:40.656192    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:40.656204    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:40.679960    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:40.679973    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:40.684805    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:40.684813    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:40.696303    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:40.696316    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:40.710698    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:40.710709    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:40.728349    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:40.728358    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:40.775695    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:40.775707    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:40.789473    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:40.789487    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:40.803597    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:40.803607    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:40.815686    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:40.815696    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:40.851125    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:40.851135    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:40.865344    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:40.865358    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:40.880671    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:40.880682    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:43.395211    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:48.397332    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:48.397494    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:48.410312    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:48.410395    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:48.421460    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:48.421531    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:48.432236    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:48.432315    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:48.442574    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:48.442638    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:48.453120    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:48.453196    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:48.463560    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:48.463633    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:48.474475    8653 logs.go:276] 0 containers: []
	W0731 12:34:48.474487    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:48.474545    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:48.485089    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:48.485108    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:48.485114    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:48.497033    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:48.497045    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:48.508334    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:48.508344    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:48.520412    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:48.520425    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:48.546053    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:48.546069    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:48.560323    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:48.560334    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:48.574386    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:48.574397    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:48.593354    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:48.593365    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:48.597892    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:48.597900    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:48.632331    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:48.632341    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:48.645538    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:48.645548    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:48.658558    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:48.658569    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:48.693953    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:48.693966    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:48.706676    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:48.706687    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:48.724071    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:48.724082    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:51.240835    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:56.242845    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:56.242933    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:56.254848    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:56.254915    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:56.265414    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:56.265489    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:56.276457    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:56.276530    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:56.286629    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:56.286695    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:56.297369    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:56.297430    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:56.308164    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:56.308227    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:56.318730    8653 logs.go:276] 0 containers: []
	W0731 12:34:56.318740    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:56.318797    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:56.329397    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:56.329415    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:56.329421    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:56.365606    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:56.365616    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:56.380371    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:56.380384    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:56.392076    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:56.392089    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:56.404178    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:56.404189    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:56.415997    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:56.416011    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:56.439658    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:56.439665    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:56.443816    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:56.443825    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:56.458054    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:56.458065    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:56.469902    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:56.469912    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:56.481601    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:56.481615    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:56.492969    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:56.492980    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:56.507822    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:56.507834    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:56.526961    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:56.526972    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:56.539165    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:56.539180    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:59.074621    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:04.076405    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:04.076622    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:04.092224    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:04.092312    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:04.104409    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:04.104490    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:04.115212    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:04.115282    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:04.126072    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:04.126136    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:04.136891    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:04.136955    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:04.147431    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:04.147492    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:04.158191    8653 logs.go:276] 0 containers: []
	W0731 12:35:04.158205    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:04.158273    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:04.169269    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:04.169287    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:04.169293    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:04.183306    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:04.183317    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:04.194773    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:04.194783    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:04.213236    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:04.213245    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:04.236367    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:04.236377    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:04.261855    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:04.261866    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:04.273623    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:04.273636    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:04.309491    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:04.309504    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:04.323993    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:04.324006    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:04.335294    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:04.335304    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:04.346711    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:04.346724    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:04.358242    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:04.358255    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:04.392589    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:04.392599    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:04.407821    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:04.407831    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:04.412500    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:04.412507    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:06.925804    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:11.926775    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:11.926921    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:11.938259    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:11.938341    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:11.949075    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:11.949148    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:11.959947    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:11.960022    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:11.970487    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:11.970557    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:11.980782    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:11.980856    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:11.996241    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:11.996315    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:12.006364    8653 logs.go:276] 0 containers: []
	W0731 12:35:12.006376    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:12.006435    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:12.017166    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:12.017184    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:12.017189    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:12.040573    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:12.040580    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:12.054161    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:12.054171    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:12.065768    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:12.065781    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:12.083105    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:12.083115    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:12.094918    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:12.094930    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:12.108002    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:12.108014    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:12.120516    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:12.120525    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:12.125523    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:12.125529    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:12.137192    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:12.137203    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:12.152420    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:12.152434    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:12.187898    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:12.187907    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:12.199548    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:12.199557    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:12.242549    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:12.242562    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:12.261336    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:12.261348    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:14.775060    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:19.777179    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:19.777414    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:19.794914    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:19.795007    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:19.808191    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:19.808274    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:19.823179    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:19.823252    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:19.833111    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:19.833186    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:19.843676    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:19.843745    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:19.854830    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:19.854897    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:19.865479    8653 logs.go:276] 0 containers: []
	W0731 12:35:19.865491    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:19.865552    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:19.876300    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:19.876317    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:19.876322    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:19.888611    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:19.888624    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:19.925433    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:19.925455    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:19.940824    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:19.940835    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:19.976512    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:19.976524    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:19.991056    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:19.991068    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:20.005982    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:20.005994    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:20.017615    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:20.017626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:20.029807    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:20.029823    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:20.046109    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:20.046120    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:20.057445    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:20.057457    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:20.074765    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:20.074775    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:20.079124    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:20.079133    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:20.101306    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:20.101316    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:20.113380    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:20.113392    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:22.627718    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:27.629861    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:27.630024    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:27.643102    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:27.643188    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:27.656430    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:27.656498    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:27.671016    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:27.671097    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:27.681293    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:27.681363    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:27.691943    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:27.692010    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:27.705103    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:27.705178    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:27.715576    8653 logs.go:276] 0 containers: []
	W0731 12:35:27.715588    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:27.715649    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:27.725882    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:27.725900    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:27.725905    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:27.730496    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:27.730503    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:27.741963    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:27.741975    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:27.753688    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:27.753702    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:27.771315    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:27.771328    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:27.783109    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:27.783121    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:27.818160    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:27.818170    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:27.856132    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:27.856144    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:27.870543    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:27.870554    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:27.881766    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:27.881779    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:27.894266    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:27.894279    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:27.906527    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:27.906537    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:27.926188    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:27.926201    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:27.942053    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:27.942064    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:27.954001    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:27.954015    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:30.479977    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:35.482111    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:35.482211    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:35.494985    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:35.495068    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:35.506331    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:35.506423    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:35.517826    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:35.517919    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:35.529317    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:35.529391    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:35.540022    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:35.540088    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:35.550521    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:35.550588    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:35.560486    8653 logs.go:276] 0 containers: []
	W0731 12:35:35.560499    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:35.560571    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:35.575014    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:35.575030    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:35.575036    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:35.579441    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:35.579447    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:35.590521    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:35.590530    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:35.602165    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:35.602176    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:35.626290    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:35.626302    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:35.640650    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:35.640661    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:35.655047    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:35.655059    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:35.666438    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:35.666448    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:35.680677    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:35.680690    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:35.694045    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:35.694059    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:35.706288    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:35.706302    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:35.724544    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:35.724556    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:35.758428    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:35.758436    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:35.794384    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:35.794397    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:35.808492    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:35.808503    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:38.322522    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:43.324625    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:43.330189    8653 out.go:177] 
	W0731 12:35:43.334182    8653 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:35:43.334192    8653 out.go:239] * 
	W0731 12:35:43.334940    8653 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:35:43.348451    8653 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-992000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-31 12:35:43.437777 -0700 PDT m=+1307.245626501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-992000 -n running-upgrade-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-992000 -n running-upgrade-992000: exit status 2 (15.741353167s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-992000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo cat                            | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo cat                            | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo cat                            | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo cat                            | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo                                | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo find                           | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-452000 sudo crio                           | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-452000                                     | cilium-452000             | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT | 31 Jul 24 12:25 PDT |
	| start   | -p kubernetes-upgrade-389000                         | kubernetes-upgrade-389000 | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-917000                             | offline-docker-917000     | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT | 31 Jul 24 12:25 PDT |
	| start   | -p stopped-upgrade-155000                            | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:25 PDT | 31 Jul 24 12:26 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-389000                         | kubernetes-upgrade-389000 | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT | 31 Jul 24 12:25 PDT |
	| start   | -p kubernetes-upgrade-389000                         | kubernetes-upgrade-389000 | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-389000                         | kubernetes-upgrade-389000 | jenkins | v1.33.1 | 31 Jul 24 12:25 PDT | 31 Jul 24 12:25 PDT |
	| start   | -p running-upgrade-992000                            | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:25 PDT | 31 Jul 24 12:26 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-155000 stop                          | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:26 PDT | 31 Jul 24 12:26 PDT |
	| start   | -p stopped-upgrade-155000                            | stopped-upgrade-155000    | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-992000                            | running-upgrade-992000    | jenkins | v1.33.1 | 31 Jul 24 12:26 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:26:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:26:53.882150    8653 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:53.882269    8653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:53.882274    8653 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:53.882276    8653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:53.882426    8653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:26:53.883499    8653 out.go:298] Setting JSON to false
	I0731 12:26:53.900375    8653 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5176,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:26:53.900469    8653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:53.904944    8653 out.go:177] * [running-upgrade-992000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:53.912022    8653 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:26:53.912062    8653 notify.go:220] Checking for updates...
	I0731 12:26:53.918946    8653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:26:53.920225    8653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:53.922972    8653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:53.926022    8653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:26:53.929022    8653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:53.932255    8653 config.go:182] Loaded profile config "running-upgrade-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:26:53.934964    8653 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:26:53.938025    8653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:53.940996    8653 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:26:53.948052    8653 start.go:297] selected driver: qemu2
	I0731 12:26:53.948057    8653 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:26:53.948101    8653 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:53.950359    8653 cni.go:84] Creating CNI manager for ""
	I0731 12:26:53.950381    8653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:26:53.950412    8653 start.go:340] cluster config:
	{Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:26:53.950460    8653 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:53.958958    8653 out.go:177] * Starting "running-upgrade-992000" primary control-plane node in "running-upgrade-992000" cluster
	I0731 12:26:53.961913    8653 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:26:53.961924    8653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:26:53.961932    8653 cache.go:56] Caching tarball of preloaded images
	I0731 12:26:53.961974    8653 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:26:53.961978    8653 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:26:53.962027    8653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/config.json ...
	I0731 12:26:53.962337    8653 start.go:360] acquireMachinesLock for running-upgrade-992000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:27:06.346343    8653 start.go:364] duration metric: took 12.384411292s to acquireMachinesLock for "running-upgrade-992000"
	I0731 12:27:06.346386    8653 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:27:06.346395    8653 fix.go:54] fixHost starting: 
	I0731 12:27:06.347199    8653 fix.go:112] recreateIfNeeded on running-upgrade-992000: state=Running err=<nil>
	W0731 12:27:06.347210    8653 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:27:06.351367    8653 out.go:177] * Updating the running qemu2 "running-upgrade-992000" VM ...
	I0731 12:27:05.370841    8642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/config.json ...
	I0731 12:27:05.371229    8642 machine.go:94] provisionDockerMachine start ...
	I0731 12:27:05.371376    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.371619    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.371627    8642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:27:05.441874    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 12:27:05.441890    8642 buildroot.go:166] provisioning hostname "stopped-upgrade-155000"
	I0731 12:27:05.441958    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.442086    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.442094    8642 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-155000 && echo "stopped-upgrade-155000" | sudo tee /etc/hostname
	I0731 12:27:05.509632    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-155000
	
	I0731 12:27:05.509687    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.509807    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.509815    8642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-155000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-155000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-155000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:27:05.575029    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:05.575043    8642 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-6494/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-6494/.minikube}
	I0731 12:27:05.575054    8642 buildroot.go:174] setting up certificates
	I0731 12:27:05.575058    8642 provision.go:84] configureAuth start
	I0731 12:27:05.575066    8642 provision.go:143] copyHostCerts
	I0731 12:27:05.575159    8642 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem, removing ...
	I0731 12:27:05.575167    8642 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem
	I0731 12:27:05.575277    8642 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem (1082 bytes)
	I0731 12:27:05.575473    8642 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem, removing ...
	I0731 12:27:05.575478    8642 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem
	I0731 12:27:05.575532    8642 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem (1123 bytes)
	I0731 12:27:05.575640    8642 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem, removing ...
	I0731 12:27:05.575645    8642 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem
	I0731 12:27:05.575690    8642 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem (1675 bytes)
	I0731 12:27:05.575771    8642 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-155000 san=[127.0.0.1 localhost minikube stopped-upgrade-155000]
	I0731 12:27:05.652724    8642 provision.go:177] copyRemoteCerts
	I0731 12:27:05.652778    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:27:05.652787    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:27:05.688257    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:27:05.694772    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:27:05.701514    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 12:27:05.709029    8642 provision.go:87] duration metric: took 133.9695ms to configureAuth
	I0731 12:27:05.709041    8642 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:27:05.709181    8642 config.go:182] Loaded profile config "stopped-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:27:05.709215    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.709301    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.709307    8642 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:27:05.774864    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:27:05.774875    8642 buildroot.go:70] root file system type: tmpfs
	I0731 12:27:05.774933    8642 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:27:05.774991    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.775161    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.775204    8642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:27:05.842812    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:27:05.842884    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.843012    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.843021    8642 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:27:06.231997    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
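The exchange above is minikube's idempotent unit-update pattern: render the unit to docker.service.new, replace the installed unit only when it differs, then reload systemd and restart Docker. A commented, multi-line equivalent of the one-liner, as a readability sketch (not part of the captured log):

	# diff exits non-zero when the files differ or when the installed unit does not
	# exist yet (the "can't stat" case above), so the update branch runs exactly then.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload &&
	  sudo systemctl -f enable docker &&
	  sudo systemctl -f restart docker
	}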
	
	I0731 12:27:06.232010    8642 machine.go:97] duration metric: took 860.803ms to provisionDockerMachine
	I0731 12:27:06.232017    8642 start.go:293] postStartSetup for "stopped-upgrade-155000" (driver="qemu2")
	I0731 12:27:06.232025    8642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:27:06.232137    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:27:06.232151    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:27:06.268394    8642 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:27:06.269813    8642 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:27:06.269821    8642 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/addons for local assets ...
	I0731 12:27:06.269910    8642 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/files for local assets ...
	I0731 12:27:06.270030    8642 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0731 12:27:06.270157    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:27:06.272964    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:06.280067    8642 start.go:296] duration metric: took 48.043041ms for postStartSetup
	I0731 12:27:06.280086    8642 fix.go:56] duration metric: took 21.329396584s for fixHost
	I0731 12:27:06.280150    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.280276    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:06.280282    8642 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 12:27:06.346263    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454026.563938463
	
	I0731 12:27:06.346273    8642 fix.go:216] guest clock: 1722454026.563938463
	I0731 12:27:06.346277    8642 fix.go:229] Guest: 2024-07-31 12:27:06.563938463 -0700 PDT Remote: 2024-07-31 12:27:06.280088 -0700 PDT m=+21.447041084 (delta=283.850463ms)
	I0731 12:27:06.346295    8642 fix.go:200] guest clock delta is within tolerance: 283.850463ms
	I0731 12:27:06.346297    8642 start.go:83] releasing machines lock for "stopped-upgrade-155000", held for 21.395618167s
	I0731 12:27:06.346378    8642 ssh_runner.go:195] Run: cat /version.json
	I0731 12:27:06.346387    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:27:06.346398    8642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:27:06.346415    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	W0731 12:27:06.347145    8642 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51411->127.0.0.1:51225: write: broken pipe
	I0731 12:27:06.347163    8642 retry.go:31] will retry after 230.602924ms: ssh: handshake failed: write tcp 127.0.0.1:51411->127.0.0.1:51225: write: broken pipe
	W0731 12:27:06.610590    8642 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:27:06.610673    8642 ssh_runner.go:195] Run: systemctl --version
	I0731 12:27:06.612579    8642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:27:06.614309    8642 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:27:06.614336    8642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:27:06.617222    8642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:27:06.622899    8642 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:27:06.622910    8642 start.go:495] detecting cgroup driver to use...
	I0731 12:27:06.623030    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:06.629951    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:27:06.633173    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:27:06.636479    8642 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:27:06.636507    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:27:06.639958    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:06.643683    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:27:06.646818    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:06.649751    8642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:27:06.653181    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:27:06.656332    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:27:06.659299    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:27:06.662213    8642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:27:06.665100    8642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:27:06.668423    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:06.747317    8642 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:27:06.754486    8642 start.go:495] detecting cgroup driver to use...
	I0731 12:27:06.754554    8642 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:27:06.759995    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:06.768412    8642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:27:06.774752    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:06.779285    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:27:06.783298    8642 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 12:27:06.827597    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:27:06.832681    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:06.838125    8642 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:27:06.839283    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:27:06.841640    8642 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
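The steps above wire up cri-dockerd: crictl is pointed at the cri-dockerd socket and a 10-cni.conf drop-in is installed for the cri-docker service (the drop-in is copied from memory, so its 189 bytes never appear in this log). The crictl config written above is the single line taken from the command itself:

	runtime-endpoint: unix:///var/run/cri-dockerd.sock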
	I0731 12:27:06.846681    8642 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:27:06.925906    8642 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:27:07.008884    8642 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:27:07.008956    8642 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:27:07.014533    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:07.091530    8642 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:08.220399    8642 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.128891709s)
	I0731 12:27:08.220465    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:27:08.224770    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:08.229043    8642 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:27:08.311674    8642 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:27:08.386850    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:08.450417    8642 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:27:08.455850    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:08.460171    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:08.540606    8642 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:27:08.580368    8642 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:27:08.580449    8642 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:27:08.583306    8642 start.go:563] Will wait 60s for crictl version
	I0731 12:27:08.583359    8642 ssh_runner.go:195] Run: which crictl
	I0731 12:27:08.585147    8642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:27:08.601716    8642 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:27:08.601804    8642 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:08.619889    8642 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:06.361255    8653 machine.go:94] provisionDockerMachine start ...
	I0731 12:27:06.361319    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.361441    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.361445    8653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:27:06.421914    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-992000
	
	I0731 12:27:06.421930    8653 buildroot.go:166] provisioning hostname "running-upgrade-992000"
	I0731 12:27:06.421975    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.422098    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.422103    8653 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-992000 && echo "running-upgrade-992000" | sudo tee /etc/hostname
	I0731 12:27:06.495256    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-992000
	
	I0731 12:27:06.495315    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.495454    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.495465    8653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-992000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-992000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-992000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:27:06.560763    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:06.560775    8653 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-6494/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-6494/.minikube}
	I0731 12:27:06.560788    8653 buildroot.go:174] setting up certificates
	I0731 12:27:06.560793    8653 provision.go:84] configureAuth start
	I0731 12:27:06.560800    8653 provision.go:143] copyHostCerts
	I0731 12:27:06.560872    8653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem, removing ...
	I0731 12:27:06.560880    8653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem
	I0731 12:27:06.561003    8653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem (1082 bytes)
	I0731 12:27:06.561194    8653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem, removing ...
	I0731 12:27:06.561198    8653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem
	I0731 12:27:06.561242    8653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem (1123 bytes)
	I0731 12:27:06.561347    8653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem, removing ...
	I0731 12:27:06.561350    8653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem
	I0731 12:27:06.561391    8653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem (1675 bytes)
	I0731 12:27:06.561479    8653 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-992000 san=[127.0.0.1 localhost minikube running-upgrade-992000]
	I0731 12:27:06.695327    8653 provision.go:177] copyRemoteCerts
	I0731 12:27:06.695360    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:27:06.695368    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:27:06.728419    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:27:06.735754    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:27:06.745449    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:27:06.752914    8653 provision.go:87] duration metric: took 192.119541ms to configureAuth
	I0731 12:27:06.752927    8653 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:27:06.753055    8653 config.go:182] Loaded profile config "running-upgrade-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:27:06.753095    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.753191    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.753198    8653 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:27:06.815531    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:27:06.815541    8653 buildroot.go:70] root file system type: tmpfs
	I0731 12:27:06.815601    8653 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:27:06.815655    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.815768    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.815801    8653 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:27:06.881031    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:27:06.881087    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.881205    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.881214    8653 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:27:06.942598    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:06.942611    8653 machine.go:97] duration metric: took 581.369375ms to provisionDockerMachine
	I0731 12:27:06.942617    8653 start.go:293] postStartSetup for "running-upgrade-992000" (driver="qemu2")
	I0731 12:27:06.942624    8653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:27:06.942689    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:27:06.942698    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:27:06.975416    8653 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:27:06.976843    8653 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:27:06.976852    8653 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/addons for local assets ...
	I0731 12:27:06.976927    8653 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/files for local assets ...
	I0731 12:27:06.977012    8653 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0731 12:27:06.977102    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:27:06.980250    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:06.987888    8653 start.go:296] duration metric: took 45.266959ms for postStartSetup
	I0731 12:27:06.987905    8653 fix.go:56] duration metric: took 641.53525ms for fixHost
	I0731 12:27:06.987941    8653 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.988055    8653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104a32a10] 0x104a35270 <nil>  [] 0s} localhost 51265 <nil> <nil>}
	I0731 12:27:06.988060    8653 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 12:27:07.049589    8653 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454027.157936791
	
	I0731 12:27:07.049599    8653 fix.go:216] guest clock: 1722454027.157936791
	I0731 12:27:07.049603    8653 fix.go:229] Guest: 2024-07-31 12:27:07.157936791 -0700 PDT Remote: 2024-07-31 12:27:06.987907 -0700 PDT m=+13.127075001 (delta=170.029791ms)
	I0731 12:27:07.049615    8653 fix.go:200] guest clock delta is within tolerance: 170.029791ms
	I0731 12:27:07.049618    8653 start.go:83] releasing machines lock for "running-upgrade-992000", held for 703.279917ms
	I0731 12:27:07.049688    8653 ssh_runner.go:195] Run: cat /version.json
	I0731 12:27:07.049701    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:27:07.049688    8653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:27:07.049731    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	W0731 12:27:07.080551    8653 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:27:07.080623    8653 ssh_runner.go:195] Run: systemctl --version
	I0731 12:27:07.082584    8653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:27:07.084253    8653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:27:07.084282    8653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:27:07.087631    8653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:27:07.092573    8653 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
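	Unpacked, the podman-oriented rewrite above is equivalent to running the sed directly against the one file the "configured" line names (a sketch of the same edit, spelled out):
	  sudo sed -i -r \
	    -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	    -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
	    /etc/cni/net.d/87-podman-bridge.conflist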
	I0731 12:27:07.092583    8653 start.go:495] detecting cgroup driver to use...
	I0731 12:27:07.092657    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:07.098586    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:27:07.101794    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:27:07.104978    8653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:27:07.105008    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:27:07.108284    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:07.111572    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:27:07.114408    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:07.117150    8653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:27:07.120640    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:27:07.123481    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:27:07.126388    8653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
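	Taken together, the sed edits above should leave /etc/containerd/config.toml with at least the following settings (a sketch of what to expect; the surrounding TOML structure comes from the image's stock config):
	  grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  #   sandbox_image = "registry.k8s.io/pause:3.7"
	  #   restrict_oom_score_adj = false
	  #   SystemdCgroup = false
	  #   conf_dir = "/etc/cni/net.d"
	  #   enable_unprivileged_ports = true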
	I0731 12:27:07.129224    8653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:27:07.132609    8653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:27:07.136156    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:07.254192    8653 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:27:07.265023    8653 start.go:495] detecting cgroup driver to use...
	I0731 12:27:07.265099    8653 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:27:07.275903    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:07.282643    8653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:27:07.293362    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:07.330348    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:27:07.335863    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:07.341790    8653 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:27:07.343119    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:27:07.345505    8653 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:27:07.350583    8653 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:27:07.456130    8653 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:27:07.573974    8653 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:27:07.574039    8653 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
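	The log records only the size of the daemon.json it writes (130 bytes), not its content; a hypothetical minimal file consistent with the "cgroupfs" driver named above would be:
	  cat /etc/docker/daemon.json
	  # hypothetical content -- the actual 130-byte file is not echoed in the log:
	  # {
	  #   "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  # }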
	I0731 12:27:07.580215    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:07.683035    8653 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:08.640985    8642 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:27:08.641053    8642 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:27:08.642256    8642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:27:08.645764    8642 kubeadm.go:883] updating cluster {Name:stopped-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51260 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:27:08.645811    8642 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:27:08.645853    8642 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:08.660494    8642 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:08.660503    8642 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:08.660548    8642 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:08.664016    8642 ssh_runner.go:195] Run: which lz4
	I0731 12:27:08.665247    8642 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:27:08.666488    8642 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:27:08.666499    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:27:09.653303    8642 docker.go:649] duration metric: took 988.116708ms to copy over tarball
	I0731 12:27:09.653363    8642 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:27:10.865967    8642 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.2126325s)
	I0731 12:27:10.865980    8642 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 12:27:10.882001    8642 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:10.885148    8642 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:27:10.890498    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:10.974508    8642 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:12.270803    8642 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.296309208s)
	I0731 12:27:12.270891    8642 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:12.289906    8642 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:12.289915    8642 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:12.289920    8642 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:27:12.294606    8642 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.296542    8642 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.298552    8642 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.298884    8642 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:27:12.300399    8642 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.300772    8642 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.302119    8642 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.302526    8642 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:27:12.303686    8642 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.303768    8642 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.305222    8642 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.305325    8642 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.306290    8642 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.306318    8642 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.307719    8642 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.308236    8642 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.683377    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:27:12.694305    8642 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:27:12.694343    8642 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:27:12.694407    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:27:12.703964    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:27:12.704074    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:27:12.706408    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.706527    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:27:12.706537    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:27:12.710723    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.714434    8642 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:27:12.714442    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:27:12.726111    8642 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:27:12.726131    8642 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.726187    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.748983    8642 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:27:12.749005    8642 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.749063    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0731 12:27:12.753027    8642 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:12.753271    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.753904    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.763790    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:27:12.763838    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:27:12.767724    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:27:12.767886    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:12.768612    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.783832    8642 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:27:12.783841    8642 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:27:12.783856    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:27:12.783856    8642 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.783882    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:27:12.783900    8642 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.783944    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.783948    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.795297    8642 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:27:12.795316    8642 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.795374    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.822526    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.824897    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:27:12.824932    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:27:12.824937    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:27:12.825043    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:12.880045    8642 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:27:12.880052    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:27:12.880063    8642 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.880085    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:27:12.880123    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.913304    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0731 12:27:12.961614    8642 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:12.961829    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.983502    8642 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:12.983518    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:27:12.997596    8642 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:27:12.997621    8642 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.997682    8642 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:13.103894    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:27:13.103932    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:27:13.104053    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:27:13.108398    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:27:13.108430    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:27:13.109131    8642 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:13.109138    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:27:13.301305    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:27:13.301327    8642 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:27:13.301336    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:27:13.532995    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:27:13.533036    8642 cache_images.go:92] duration metric: took 1.243151125s to LoadCachedImages
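	Reconstructed from the lines above, every cached image follows the same sequence; a sketch for pause:3.7 (hash and paths copied from the log, the exact comparison logic is assumed):
	  img=registry.k8s.io/pause:3.7
	  tarball=/var/lib/minikube/images/pause_3.7
	  want=e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550
	  have=$(docker image inspect --format '{{.Id}}' "$img" 2>/dev/null)
	  if [ "$have" != "sha256:$want" ]; then   # "needs transfer"
	    docker rmi "$img"                      # drop the mismatched copy
	    # ...the cached arm64 tarball is scp'd from the host to $tarball, then:
	    sudo cat "$tarball" | docker load
	  fi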
	W0731 12:27:13.533078    8642 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0731 12:27:13.533084    8642 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:27:13.533146    8642 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-155000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:27:13.533220    8642 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:27:13.546946    8642 cni.go:84] Creating CNI manager for ""
	I0731 12:27:13.546959    8642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:27:13.546965    8642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:27:13.546974    8642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-155000 NodeName:stopped-upgrade-155000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:27:13.547062    8642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-155000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:27:13.547119    8642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:27:13.550106    8642 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:27:13.550139    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:27:13.552699    8642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:27:13.557696    8642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:27:13.562601    8642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:27:13.567933    8642 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:27:13.569353    8642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:27:13.572800    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:13.646887    8642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:27:13.655181    8642 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000 for IP: 10.0.2.15
	I0731 12:27:13.655191    8642 certs.go:194] generating shared ca certs ...
	I0731 12:27:13.655199    8642 certs.go:226] acquiring lock for ca certs: {Name:mkb4a8d4a0878ee693c06454a581729087f6c431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.655472    8642 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key
	I0731 12:27:13.655524    8642 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key
	I0731 12:27:13.655529    8642 certs.go:256] generating profile certs ...
	I0731 12:27:13.655607    8642 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.key
	I0731 12:27:13.655621    8642 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312
	I0731 12:27:13.655632    8642 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:27:13.719761    8642 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312 ...
	I0731 12:27:13.719774    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312: {Name:mk8adc6a1fe5d527ecfd0ab44b37d59a5c29a31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.720098    8642 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312 ...
	I0731 12:27:13.720103    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312: {Name:mk84174f961cf12d935f29b3e4833b564d681dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.720239    8642 certs.go:381] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt
	I0731 12:27:13.720360    8642 certs.go:385] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key
	I0731 12:27:13.720512    8642 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/proxy-client.key
	I0731 12:27:13.720643    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem (1338 bytes)
	W0731 12:27:13.720678    8642 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0731 12:27:13.720684    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:27:13.720707    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:27:13.720731    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:27:13.720748    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem (1675 bytes)
	I0731 12:27:13.720787    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:13.721138    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:27:13.728779    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:27:13.736084    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:27:13.742594    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 12:27:13.748948    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:27:13.755968    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:27:13.762747    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:27:13.769506    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:27:13.776712    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0731 12:27:13.784557    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0731 12:27:13.791822    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:27:13.799519    8642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:27:13.805464    8642 ssh_runner.go:195] Run: openssl version
	I0731 12:27:13.807596    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:27:13.810672    8642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:13.812151    8642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:13.812174    8642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:13.813975    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:27:13.816998    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0731 12:27:13.820406    8642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0731 12:27:13.822000    8642 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:15 /usr/share/ca-certificates/6986.pem
	I0731 12:27:13.822030    8642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0731 12:27:13.823849    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0731 12:27:13.826996    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0731 12:27:13.829901    8642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0731 12:27:13.831422    8642 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:15 /usr/share/ca-certificates/69862.pem
	I0731 12:27:13.831449    8642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0731 12:27:13.833308    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
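	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is exactly what the x509 -hash calls compute, e.g.:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # b5213941 -> hence the /etc/ssl/certs/b5213941.0 link created above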
	I0731 12:27:13.837050    8642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:27:13.838705    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:27:13.840805    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:27:13.842821    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:27:13.844871    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:27:13.846887    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:27:13.848881    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
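	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so a zero exit means the cert is good for at least another day:
	  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	    && echo "valid for at least 24h"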
	I0731 12:27:13.850992    8642 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51260 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:27:13.851073    8642 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:13.863863    8642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:27:13.867162    8642 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:27:13.867170    8642 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:27:13.867205    8642 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:27:13.870198    8642 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:13.870243    8642 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-155000" does not appear in /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:27:13.870257    8642 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-6494/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-155000" cluster setting kubeconfig missing "stopped-upgrade-155000" context setting]
	I0731 12:27:13.870435    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.871040    8642 kapi.go:59] client config for stopped-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10227c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:27:13.871924    8642 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:27:13.875229    8642 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-155000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0731 12:27:13.875235    8642 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:27:13.875284    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:13.886862    8642 docker.go:483] Stopping containers: [844d08449ca3 2c107ecdfbc5 8d7ed7e39f35 de06d8fe8794 17f9db5ab999 acc1c033fa0f f47d526e0a42 e15e07a5136c]
	I0731 12:27:13.886929    8642 ssh_runner.go:195] Run: docker stop 844d08449ca3 2c107ecdfbc5 8d7ed7e39f35 de06d8fe8794 17f9db5ab999 acc1c033fa0f f47d526e0a42 e15e07a5136c
	I0731 12:27:13.898510    8642 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:27:13.904366    8642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:27:13.907765    8642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:27:13.907774    8642 kubeadm.go:157] found existing configuration files:
	
	I0731 12:27:13.907812    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf
	I0731 12:27:13.910589    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:27:13.910635    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:27:13.913551    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf
	I0731 12:27:13.916592    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:27:13.916648    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:27:13.920365    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf
	I0731 12:27:13.924000    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:27:13.924053    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:27:13.927343    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf
	I0731 12:27:13.930514    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:27:13.930567    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:27:13.933469    8642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:27:13.936877    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:13.962929    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.306434    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.434777    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.455842    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.478516    8642 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:27:14.478589    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:14.980833    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:15.480638    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:15.484663    8642 api_server.go:72] duration metric: took 1.006182875s to wait for apiserver process to appear ...
	I0731 12:27:15.484672    8642 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:27:15.484682    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:24.196025    8653 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.513533041s)
	I0731 12:27:24.196085    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:27:24.200932    8653 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 12:27:24.207757    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:24.212586    8653 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:27:24.299484    8653 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:27:24.383781    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:24.476060    8653 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:27:24.482532    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:24.487190    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:24.573976    8653 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:27:24.611740    8653 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:27:24.611831    8653 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:27:24.614093    8653 start.go:563] Will wait 60s for crictl version
	I0731 12:27:24.614151    8653 ssh_runner.go:195] Run: which crictl
	I0731 12:27:24.615554    8653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:27:24.627488    8653 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:27:24.627567    8653 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:24.640457    8653 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:20.486745    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:20.486829    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
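	The healthz probe is, in effect, the following request (a sketch; -k because the probing side may not trust the cluster CA). In this run it never answers within the client timeout:
	  curl -sk --max-time 2 https://10.0.2.15:8443/healthz
	  # a healthy apiserver returns "ok"; here the request times out instead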
	I0731 12:27:24.664228    8653 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:27:24.664353    8653 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:27:24.665967    8653 kubeadm.go:883] updating cluster {Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:27:24.666011    8653 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:27:24.666052    8653 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:24.676295    8653 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:24.676304    8653 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
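The "wasn't preloaded" verdict above looks odd at first, since kube-apiserver clearly is in the image list. The catch is the registry rename: the preload tarball carries k8s.gcr.io tags, while minikube on Kubernetes v1.24+ expects registry.k8s.io names, so the membership check fails and the images are re-transferred from the host cache below. A minimal sketch of such a check in Go (an illustration, not minikube's actual cache_images.go):

package main

import "fmt"

// preloaded reports whether the expected image reference appears in the
// list returned by `docker images --format {{.Repository}}:{{.Tag}}`.
func preloaded(images []string, want string) bool {
	for _, img := range images {
		if img == want {
			return true
		}
	}
	return false
}

func main() {
	got := []string{
		"k8s.gcr.io/kube-apiserver:v1.24.1", // old registry name, as in the tarball
		"k8s.gcr.io/kube-proxy:v1.24.1",
	}
	// The expected name uses the new registry, so this prints false,
	// which is exactly the "wasn't preloaded" condition logged above.
	fmt.Println(preloaded(got, "registry.k8s.io/kube-apiserver:v1.24.1"))
}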
	I0731 12:27:24.676353    8653 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:24.680016    8653 ssh_runner.go:195] Run: which lz4
	I0731 12:27:24.681466    8653 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:27:24.682661    8653 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:27:24.682671    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:27:25.742051    8653 docker.go:649] duration metric: took 1.060651084s to copy over tarball
	I0731 12:27:25.742114    8653 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:27:26.836932    8653 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.094842708s)
	I0731 12:27:26.836945    8653 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 12:27:26.852844    8653 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:26.856447    8653 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:27:26.861200    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:26.938995    8653 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:28.160512    8653 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.221542375s)
	I0731 12:27:28.160603    8653 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:28.177628    8653 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:28.177637    8653 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:28.177672    8653 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:27:28.184119    8653 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.186055    8653 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.187164    8653 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.187192    8653 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.188625    8653 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.188738    8653 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:27:28.190169    8653 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.190330    8653 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.192054    8653 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.192158    8653 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:27:28.193609    8653 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.193643    8653 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.195105    8653 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.195176    8653 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.195715    8653 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.197084    8653 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.509963    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.520616    8653 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:27:28.520648    8653 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.520701    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:28.533520    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:27:28.540732    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:27:28.551707    8653 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:27:28.551730    8653 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:27:28.551776    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:27:28.562816    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:27:28.562910    8653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:27:28.564970    8653 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:27:28.564982    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0731 12:27:28.566431    8653 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:28.566542    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.577633    8653 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:27:28.577650    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:27:28.581312    8653 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:27:28.581332    8653 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.581393    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:28.585457    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.622230    8653 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
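Every image that fails the hash comparison goes through the same cycle seen above: docker image inspect, docker rmi of the stale tag, scp of the cached tarball onto the node, and a piped sudo cat ... | docker load. A Go sketch of the final load step (a hypothetical helper run locally; minikube actually drives this over SSH via its ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams a cached image tarball into the docker daemon,
// the equivalent of `cat tarball | docker load`.
func loadImage(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "Loaded image: registry.k8s.io/pause:3.7"
	return err
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}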
	I0731 12:27:28.622265    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:27:28.622271    8653 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:27:28.622289    8653 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.622332    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:28.622366    8653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:28.624160    8653 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:27:28.624175    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:27:28.632170    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.641211    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:27:28.661039    8653 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:27:28.661062    8653 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.661127    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:28.691751    8653 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:28.691766    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:27:28.693959    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:27:28.695923    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.698287    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.743374    8653 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:27:28.743424    8653 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:27:28.743441    8653 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.743449    8653 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:27:28.743458    8653 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.743498    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:28.743498    8653 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:28.760800    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:27:28.760800    8653 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:27:28.760911    8653 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:28.762532    8653 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:27:28.762543    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W0731 12:27:28.857205    8653 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:28.857312    8653 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:25.487587    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:25.487606    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:28.891802    8653 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:27:28.891825    8653 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:28.891881    8653 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:29.013832    8653 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:29.013846    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:27:29.151143    8653 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:27:29.151185    8653 cache_images.go:92] duration metric: took 973.540667ms to LoadCachedImages
	W0731 12:27:29.151229    8653 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0731 12:27:29.151235    8653 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:27:29.151290    8653 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-992000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:27:29.151347    8653 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:27:29.168864    8653 cni.go:84] Creating CNI manager for ""
	I0731 12:27:29.168874    8653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:27:29.168879    8653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:27:29.168887    8653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-992000 NodeName:running-upgrade-992000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:27:29.168948    8653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-992000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:27:29.169003    8653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:27:29.172533    8653 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:27:29.172568    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:27:29.175448    8653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:27:29.180863    8653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:27:29.186196    8653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:27:29.192139    8653 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:27:29.193617    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:29.274127    8653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:27:29.279948    8653 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000 for IP: 10.0.2.15
	I0731 12:27:29.279956    8653 certs.go:194] generating shared ca certs ...
	I0731 12:27:29.279965    8653 certs.go:226] acquiring lock for ca certs: {Name:mkb4a8d4a0878ee693c06454a581729087f6c431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.280109    8653 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key
	I0731 12:27:29.280146    8653 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key
	I0731 12:27:29.280151    8653 certs.go:256] generating profile certs ...
	I0731 12:27:29.280209    8653 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.key
	I0731 12:27:29.280230    8653 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4
	I0731 12:27:29.280240    8653 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
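The SAN list above pairs the VM and loopback addresses with 10.96.0.1, the first address of the ServiceCIDR (10.96.0.0/12 in this config), which Kubernetes assigns as the ClusterIP of the in-cluster "kubernetes" Service; in-pod clients reach the apiserver through it, so the certificate must cover it. A quick way to see where the address comes from:

package main

import (
	"fmt"
	"net"
)

func main() {
	// First host address of the service CIDR from the cluster config.
	_, cidr, _ := net.ParseCIDR("10.96.0.0/12")
	ip := cidr.IP.To4()
	ip[3]++         // 10.96.0.0 -> 10.96.0.1
	fmt.Println(ip) // the ClusterIP of the "kubernetes" Service
}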
	I0731 12:27:29.400509    8653 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4 ...
	I0731 12:27:29.400524    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4: {Name:mkc1c39bac0a72486d4fe6d3978dc041d6066cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.401067    8653 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4 ...
	I0731 12:27:29.401076    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4: {Name:mk954b0da873280871a3f0f1aa7cbf167e33245a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.401252    8653 certs.go:381] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt.9acf5db4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt
	I0731 12:27:29.401387    8653 certs.go:385] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key.9acf5db4 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key
	I0731 12:27:29.401542    8653 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/proxy-client.key
	I0731 12:27:29.401671    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem (1338 bytes)
	W0731 12:27:29.401693    8653 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0731 12:27:29.401698    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:27:29.401716    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:27:29.401739    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:27:29.401757    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem (1675 bytes)
	I0731 12:27:29.401796    8653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:29.402107    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:27:29.410150    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:27:29.417974    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:27:29.425503    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 12:27:29.432140    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:27:29.438852    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 12:27:29.446264    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:27:29.454131    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:27:29.461231    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0731 12:27:29.468137    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:27:29.474574    8653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0731 12:27:29.481957    8653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:27:29.487173    8653 ssh_runner.go:195] Run: openssl version
	I0731 12:27:29.489389    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0731 12:27:29.492494    8653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0731 12:27:29.494035    8653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:15 /usr/share/ca-certificates/69862.pem
	I0731 12:27:29.494057    8653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0731 12:27:29.496109    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:27:29.498917    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:27:29.502535    8653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:29.504222    8653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:29.504241    8653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:29.505942    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:27:29.509359    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0731 12:27:29.512568    8653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0731 12:27:29.513995    8653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:15 /usr/share/ca-certificates/6986.pem
	I0731 12:27:29.514013    8653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0731 12:27:29.515847    8653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0731 12:27:29.518878    8653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:27:29.520609    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:27:29.522834    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:27:29.524825    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:27:29.526830    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:27:29.529184    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:27:29.530994    8653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
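Each openssl x509 ... -checkend 86400 run above exits zero only if the certificate remains valid for at least another 86400 seconds (24 hours); a failure here would force regeneration. The same test expressed in Go, as a sketch (the path is taken from the log; this is not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd mirrors `openssl x509 -checkend`: it reports whether the
// certificate is still valid a duration d from now.
func checkEnd(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ok, _ := checkEnd(data, 24*time.Hour)
	fmt.Println(ok) // true: no regeneration needed
}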
	I0731 12:27:29.532886    8653 kubeadm.go:392] StartCluster: {Name:running-upgrade-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51329 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-992000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:27:29.532955    8653 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:29.543493    8653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:27:29.547052    8653 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:27:29.547058    8653 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:27:29.547084    8653 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:27:29.550744    8653 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.551082    8653 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-992000" does not appear in /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:27:29.551191    8653 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-6494/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-992000" cluster setting kubeconfig missing "running-upgrade-992000" context setting]
	I0731 12:27:29.551400    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:29.551891    8653 kapi.go:59] client config for running-upgrade-992000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dc81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:27:29.552243    8653 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:27:29.555647    8653 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-992000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0731 12:27:29.555661    8653 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:27:29.555784    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:29.567749    8653 docker.go:483] Stopping containers: [1aba6908a33d d6e907091ea2 bed6d227dd72 78c6135bb2fd c5e97c00f1d3 e237a7c0d577 8d493cbead92 4f33093eb2c6 aa920439f439 1a684af57f21 f88a399a353f 953748af082f e02cec385629 41559b97a2f2 328fb38704be da726d200e7f dea099f0915a 785fe08d3a18 3f98821dfb26 c4e0d211ab07]
	I0731 12:27:29.567807    8653 ssh_runner.go:195] Run: docker stop 1aba6908a33d d6e907091ea2 bed6d227dd72 78c6135bb2fd c5e97c00f1d3 e237a7c0d577 8d493cbead92 4f33093eb2c6 aa920439f439 1a684af57f21 f88a399a353f 953748af082f e02cec385629 41559b97a2f2 328fb38704be da726d200e7f dea099f0915a 785fe08d3a18 3f98821dfb26 c4e0d211ab07
	I0731 12:27:29.579842    8653 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:27:29.680555    8653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:27:29.684822    8653 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 31 19:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 31 19:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 19:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 31 19:26 /etc/kubernetes/scheduler.conf
	
	I0731 12:27:29.684862    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf
	I0731 12:27:29.688287    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.688320    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:27:29.691541    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf
	I0731 12:27:29.694995    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.695028    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:27:29.698379    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf
	I0731 12:27:29.701445    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.701467    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:27:29.704243    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf
	I0731 12:27:29.706863    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:29.706881    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
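The four grep-then-rm cycles above enforce a simple rule: any kubeconfig under /etc/kubernetes that no longer references the expected control-plane endpoint (https://control-plane.minikube.internal:51329 here) is removed, so the kubeadm init phase kubeconfig step below regenerates it with the right address. A minimal sketch of that rule (illustrative only, not minikube's kubeadm.go):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStale deletes path if it does not mention the expected endpoint,
// mirroring the grep-then-`rm -f` pattern in the log.
func pruneStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		fmt.Printf("%s is stale, removing\n", path)
		return os.Remove(path)
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:51329"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := pruneStale("/etc/kubernetes/"+name, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}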
	I0731 12:27:29.709978    8653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:27:29.712857    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:29.745023    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.345967    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.595401    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.624107    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:30.653311    8653 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:27:30.653385    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:31.155725    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:31.655759    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:32.153970    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:32.654082    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:32.658674    8653 api_server.go:72] duration metric: took 2.005431s to wait for apiserver process to appear ...
	I0731 12:27:32.658682    8653 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:27:32.658691    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:30.487948    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:30.487969    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
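From this point the two test processes (8653 and 8642) interleave the same probe: an HTTPS GET of /healthz with a five-second client timeout, a "stopped: ... Client.Timeout exceeded" line when it fires, and a retry, which is why each pid emits one pair of lines roughly every five seconds below. A sketch of such a poll loop (an assumption about the shape of the check, not api_server.go itself):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s spacing of the log lines
		Transport: &http.Transport{
			// The apiserver cert is self-signed for this VM, so the bare
			// probe skips verification; minikube pins its own CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// Typically "context deadline exceeded (Client.Timeout
			// exceeded while awaiting headers)", as seen above.
			fmt.Println("stopped:", err)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
}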
	I0731 12:27:37.660674    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:37.660718    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:35.489005    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:35.489053    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:42.661402    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:42.661482    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:40.490129    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:40.490197    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:47.662186    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:47.662209    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:45.491725    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:45.491790    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:52.662864    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:52.662963    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:50.493525    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:50.493587    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:57.664138    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:57.664185    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:55.495989    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:55.496039    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:02.665343    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:02.665391    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:00.497787    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:00.497827    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:07.666947    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:07.666985    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:05.499935    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:05.499974    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:12.669107    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:12.669178    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:10.500703    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:10.500788    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:17.671267    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:17.671302    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:15.503192    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:15.503323    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:15.515992    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:15.516079    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:15.528064    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:15.528130    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:15.537848    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.537863    8642 logs.go:278] No container was found matching "coredns"
	I0731 12:28:15.537938    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:15.548904    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:15.548973    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:15.558535    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.558547    8642 logs.go:278] No container was found matching "kube-proxy"
	I0731 12:28:15.558607    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:15.569253    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:15.569322    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:15.579547    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.579557    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:15.579611    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:15.589998    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.590010    8642 logs.go:278] No container was found matching "storage-provisioner"
	I0731 12:28:15.590016    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:15.590022    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:15.617826    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:15.617835    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:15.635218    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:15.635232    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:15.657951    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:15.657967    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:15.682835    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:15.682848    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:15.694521    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:15.694534    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:15.712755    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:15.712766    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:15.717004    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:15.717014    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:15.826748    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:15.826764    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:15.840921    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:15.840934    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:15.853877    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:15.853890    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:15.867363    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:15.867372    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:15.882555    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:15.882566    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:18.402662    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:22.673375    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:22.673447    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:23.403138    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:23.403392    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:23.431377    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:23.431482    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:23.448435    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:23.448529    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:23.460880    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:28:23.460953    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:23.471522    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:23.471602    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:23.481534    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:28:23.481609    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:23.492335    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:23.492404    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:23.503286    8642 logs.go:276] 0 containers: []
	W0731 12:28:23.503297    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:23.503357    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:23.514379    8642 logs.go:276] 1 containers: [9c98bd426c62]
	I0731 12:28:23.514395    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:23.514407    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:23.526389    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:23.526400    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:23.543677    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:23.543686    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:23.556109    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:28:23.556124    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:28:23.567721    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:23.567732    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:23.585723    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:23.585734    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:23.606931    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:23.606941    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:23.621255    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:23.621265    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:23.643045    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:28:23.643055    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:28:23.654815    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:23.654824    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:23.680631    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:23.680639    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:28:23.706652    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:23.706746    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:23.707810    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:23.707818    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:23.752611    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:23.752624    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:23.766902    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:23.766913    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:23.771669    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:23.771675    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:23.786966    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:28:23.786979    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:28:23.798530    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:23.798539    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:28:23.798574    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:28:23.798580    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:23.798587    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:23.798591    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:23.798594    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:27.675898    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:27.675978    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:32.678563    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:32.678923    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:32.708360    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:32.708496    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:32.727315    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:32.727434    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:32.747223    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:32.747303    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:32.758424    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:32.758498    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:32.769175    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:32.769246    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:32.780229    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:32.780305    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:32.791948    8653 logs.go:276] 0 containers: []
	W0731 12:28:32.791959    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:32.792020    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:32.802760    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:32.802777    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:32.802782    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:32.814143    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:32.814153    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:32.828021    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:32.828036    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:32.843188    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:32.843200    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:32.854314    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:32.854324    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:32.865716    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:32.865727    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:32.878880    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:32.878891    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:32.895740    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:32.895752    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:32.910997    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:32.911007    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:32.938389    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:32.938395    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:32.942918    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:32.942927    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:32.956675    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:32.956691    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:32.969603    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:32.969614    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:32.981771    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:32.981781    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:32.993861    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:32.993873    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:33.005059    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:33.005070    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:33.046978    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:33.046986    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:33.128228    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:33.128240    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:33.142285    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:33.142296    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:33.802479    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:35.657478    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:38.804929    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:38.805173    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:38.823190    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:38.823280    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:38.846924    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:38.847012    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:38.859662    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:28:38.859742    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:38.870006    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:38.870078    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:38.880237    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:28:38.880311    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:38.890867    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:38.890937    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:38.901126    8642 logs.go:276] 0 containers: []
	W0731 12:28:38.901136    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:38.901198    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:38.911600    8642 logs.go:276] 1 containers: [9c98bd426c62]
	I0731 12:28:38.911616    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:38.911621    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:38.931233    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:28:38.931247    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:28:38.942906    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:38.942920    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:38.963481    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:38.963493    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:38.976998    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:38.977012    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:38.991516    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:38.991525    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:39.019462    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:39.019474    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:39.033545    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:39.033556    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:39.045252    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:39.045262    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:39.080653    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:39.080663    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:39.094309    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:28:39.094318    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:28:39.106350    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:28:39.106361    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:28:39.116956    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:39.116969    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:28:39.142444    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:39.142542    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:39.143645    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:39.143651    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:39.147818    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:39.147826    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:39.165626    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:39.165639    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:39.189684    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:39.189692    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:28:39.189719    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:28:39.189723    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:39.189728    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:39.189732    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:39.189735    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:40.658535    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:40.659004    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:40.689151    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:40.689285    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:40.708137    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:40.708245    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:40.724180    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:40.724289    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:40.744924    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:40.744999    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:40.755237    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:40.755307    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:40.766349    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:40.766419    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:40.776641    8653 logs.go:276] 0 containers: []
	W0731 12:28:40.776653    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:40.776721    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:40.787442    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:40.787458    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:40.787465    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:40.800112    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:40.800123    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:40.818522    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:40.818533    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:40.831234    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:40.831247    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:40.843679    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:40.843690    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:40.859672    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:40.859682    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:40.886958    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:40.886968    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:40.898483    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:40.898495    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:40.910105    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:40.910115    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:40.927913    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:40.927923    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:40.939097    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:40.939107    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:40.981367    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:40.981376    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:40.985959    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:40.985968    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:41.021990    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:41.022001    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:41.036018    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:41.036028    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:41.050143    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:41.050154    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:41.063989    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:41.064006    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:41.078577    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:41.078591    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:41.090123    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:41.090135    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:43.603736    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:48.605938    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:48.606043    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:48.617058    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:48.617134    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:48.628341    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:48.628415    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:48.639959    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:48.640034    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:48.650679    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:48.650747    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:48.661558    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:48.661629    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:48.672777    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:48.672855    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:48.683313    8653 logs.go:276] 0 containers: []
	W0731 12:28:48.683323    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:48.683377    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:48.694413    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:48.694430    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:48.694436    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:48.706033    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:48.706048    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:48.724132    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:48.724146    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:48.736658    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:48.736671    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:48.748287    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:48.748301    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:48.759562    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:48.759573    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:48.787212    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:48.787220    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:48.821749    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:48.821760    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:48.835529    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:48.835539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:48.850916    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:48.850926    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:48.862642    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:48.862655    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:48.877856    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:48.877873    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:48.893317    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:48.893330    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:48.905390    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:48.905400    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:48.917656    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:48.917672    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:48.933724    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:48.933734    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:48.973961    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:48.973969    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:48.978615    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:48.978626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:48.993026    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:48.993036    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:49.193521    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:51.506252    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:54.195650    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:54.195942    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:54.223716    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:54.223846    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:54.241359    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:54.241456    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:54.254897    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:28:54.254984    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:54.266693    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:54.266776    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:54.277491    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:28:54.277570    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:54.289971    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:54.290049    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:54.300463    8642 logs.go:276] 0 containers: []
	W0731 12:28:54.300473    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:54.300532    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:54.311045    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:28:54.311063    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:54.311069    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:54.335843    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:54.335853    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:28:54.360954    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:54.361047    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:54.362154    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:28:54.362158    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:28:54.373844    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:54.373855    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:54.395714    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:54.395726    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:54.410457    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:54.410467    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:54.452215    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:54.452231    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:54.465100    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:28:54.465110    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:28:54.476277    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:28:54.476288    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:28:54.487347    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:54.487358    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:54.499246    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:54.499257    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:54.503676    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:54.503687    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:54.527192    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:28:54.527203    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:28:54.540154    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:54.540165    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:54.557273    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:54.557282    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:54.576440    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:54.576452    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:54.590261    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:54.590274    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:54.604487    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:54.604497    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:28:54.604519    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:28:54.604524    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:54.604527    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:54.604531    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:54.604534    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:56.508842    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:56.509165    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:56.538237    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:28:56.538358    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:56.555564    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:28:56.555655    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:56.569514    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:28:56.569586    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:56.581448    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:28:56.581523    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:56.592285    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:28:56.592360    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:56.603075    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:28:56.603146    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:56.618118    8653 logs.go:276] 0 containers: []
	W0731 12:28:56.618129    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:56.618191    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:56.628739    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:28:56.628757    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:28:56.628763    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:28:56.643137    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:28:56.643150    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:28:56.654606    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:28:56.654617    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:28:56.673000    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:28:56.673011    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:28:56.685103    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:28:56.685116    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:28:56.697026    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:56.697038    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:56.701686    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:56.701693    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:56.727312    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:28:56.727326    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:56.739714    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:28:56.739724    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:28:56.750753    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:28:56.750765    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:28:56.762335    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:28:56.762349    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:28:56.773861    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:56.773874    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:56.811225    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:28:56.811236    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:28:56.828563    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:28:56.828574    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:28:56.841119    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:28:56.841130    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:28:56.860740    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:28:56.860750    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:28:56.874932    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:28:56.874946    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:28:56.886474    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:28:56.886486    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:28:56.898052    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:56.898065    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:59.439955    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:04.442468    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:04.442733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:04.473912    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:04.474043    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:04.493045    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:04.493129    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:04.506925    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:04.507021    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:04.518964    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:04.519029    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:04.529862    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:04.529938    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:04.541045    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:04.541119    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:04.551251    8653 logs.go:276] 0 containers: []
	W0731 12:29:04.551265    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:04.551324    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:04.562293    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:04.562306    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:04.562313    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:04.574625    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:04.574636    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:04.593351    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:04.593362    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:04.605438    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:04.605448    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:04.608171    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:04.632611    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:04.632620    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:04.675089    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:04.675101    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:04.688872    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:04.688883    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:04.703768    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:04.703778    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:04.715830    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:04.715842    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:04.732537    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:04.732552    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:04.744493    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:04.744505    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:04.755720    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:04.755730    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:04.767222    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:04.767233    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:04.778799    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:04.778812    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:04.791223    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:04.791235    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:04.804645    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:04.804659    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:04.809254    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:04.809263    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:04.846905    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:04.846920    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:04.862568    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:04.862580    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:07.375834    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:09.610247    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:09.610512    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:09.637084    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:09.637217    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:09.654543    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:09.654637    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:09.668057    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:09.668137    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:09.685678    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:09.685758    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:09.696246    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:09.696317    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:09.710141    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:09.710215    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:09.720701    8642 logs.go:276] 0 containers: []
	W0731 12:29:09.720713    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:09.720776    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:09.732152    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:09.732174    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:09.732179    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:09.750107    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:09.750117    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:09.761999    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:09.762009    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:09.773728    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:09.773739    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:09.798936    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:09.799033    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:09.800170    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:09.800178    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:09.836797    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:09.836813    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:09.850197    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:09.850205    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:09.864587    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:09.864602    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:09.876337    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:09.876346    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:09.901251    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:09.901259    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:12.378337    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:12.378604    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:12.401976    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:12.402106    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:12.417655    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:12.417737    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:12.430540    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:12.430623    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:12.441518    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:12.441599    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:12.451469    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:12.451535    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:12.462043    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:12.462116    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:12.472368    8653 logs.go:276] 0 containers: []
	W0731 12:29:12.472380    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:12.472438    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:12.482984    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:12.483001    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:12.483007    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:12.496693    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:12.496704    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:12.510428    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:12.510440    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:12.521778    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:12.521790    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:12.533224    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:12.533234    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:12.551141    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:12.551154    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:12.562795    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:12.562810    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:12.604349    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:12.604357    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:12.640840    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:12.640850    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:12.645614    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:12.645621    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:12.671908    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:12.671920    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:12.683272    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:12.683283    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:12.695994    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:12.696005    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:12.708577    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:12.708587    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:12.720502    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:12.720512    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:12.732898    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:12.732908    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:12.743970    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:12.743981    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:12.759659    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:12.759670    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:12.774159    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:12.774170    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:09.905112    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:09.905120    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:09.922466    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:09.922479    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:09.935295    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:09.935304    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:09.949386    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:09.949397    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:09.963804    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:09.963816    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:09.975644    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:09.975655    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:09.988116    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:09.988130    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:10.010919    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:10.010929    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:10.010955    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:10.010959    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:10.010963    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:10.010967    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:10.010970    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:15.287458    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:20.289593    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:20.289729    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:20.301848    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:20.301930    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:20.312122    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:20.312195    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:20.322594    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:20.322667    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:20.333340    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:20.333411    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:20.344154    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:20.344220    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:20.355114    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:20.355189    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:20.365831    8653 logs.go:276] 0 containers: []
	W0731 12:29:20.365844    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:20.365902    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:20.380951    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:20.380968    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:20.380973    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:20.394987    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:20.395001    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:20.408526    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:20.408539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:20.419979    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:20.419993    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:20.431079    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:20.431093    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:20.465970    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:20.465982    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:20.479266    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:20.479276    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:20.491739    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:20.491749    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:20.502750    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:20.502761    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:20.528275    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:20.528284    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:20.540719    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:20.540733    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:20.552567    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:20.552579    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:20.570110    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:20.570120    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:20.588345    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:20.588356    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:20.600048    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:20.600059    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:20.611040    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:20.611051    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:20.622262    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:20.622273    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:20.663585    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:20.663599    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:20.668341    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:20.668350    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:23.184888    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:20.014787    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:28.187198    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:28.187617    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:28.239015    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:28.239152    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:28.261083    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:28.261175    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:28.275914    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:28.276002    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:28.287472    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:28.287545    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:28.298633    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:28.298714    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:28.309416    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:28.309497    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:28.319969    8653 logs.go:276] 0 containers: []
	W0731 12:29:28.319981    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:28.320041    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:28.330825    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:28.330840    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:28.330845    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:28.343779    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:28.343789    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:28.357501    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:28.357510    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:28.369608    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:28.369619    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:28.382319    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:28.382330    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:28.393870    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:28.393883    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:28.411889    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:28.411903    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:28.416447    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:28.416454    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:28.430558    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:28.430569    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:28.441959    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:28.441970    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:28.454444    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:28.454455    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:28.465572    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:28.465583    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:28.476693    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:28.476705    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:28.489825    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:28.489835    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:28.529163    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:28.529175    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:28.565945    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:28.565960    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:28.580674    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:28.580686    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:28.596659    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:28.596673    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:28.609674    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:28.609686    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:25.017024    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:25.017283    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:25.036017    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:25.036116    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:25.050356    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:25.050426    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:25.062728    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:25.062804    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:25.073241    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:25.073307    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:25.097518    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:25.097595    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:25.122366    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:25.122428    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:25.136434    8642 logs.go:276] 0 containers: []
	W0731 12:29:25.136446    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:25.136500    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:25.146974    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:25.146993    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:25.146999    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:25.160789    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:25.160798    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:25.172298    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:25.172311    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:25.176237    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:25.176245    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:25.198378    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:25.198393    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:25.216614    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:25.216624    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:25.228343    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:25.228358    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:25.242532    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:25.242543    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:25.266768    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:25.266779    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:25.278492    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:25.278506    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:25.293959    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:25.293973    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:25.332311    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:25.332324    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:25.346749    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:25.346759    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:25.359378    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:25.359388    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:25.372031    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:25.372045    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:25.389334    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:25.389348    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:25.400481    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:25.400493    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:25.425000    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:25.425094    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:25.426242    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:25.426248    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:25.426270    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:25.426273    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:25.426277    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:25.426280    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:25.426283    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:31.134851    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:36.137066    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:36.137184    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:36.164842    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:36.164921    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:36.176102    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:36.176172    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:36.186706    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:36.186777    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:36.196965    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:36.197025    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:36.207492    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:36.207564    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:36.218338    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:36.218398    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:36.233324    8653 logs.go:276] 0 containers: []
	W0731 12:29:36.233339    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:36.233398    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:36.251730    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:36.251746    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:36.251752    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:36.256182    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:36.256189    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:36.291217    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:36.291229    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:36.312843    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:36.312854    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:36.337262    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:36.337272    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:36.349590    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:36.349599    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:36.363595    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:36.363606    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:36.374481    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:36.374493    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:36.388052    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:36.388064    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:36.399336    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:36.399347    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:36.413664    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:36.413675    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:36.424965    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:36.424977    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:36.435867    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:36.435878    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:36.447834    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:36.447847    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:36.487124    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:36.487135    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:36.501006    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:36.501019    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:36.512594    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:36.512605    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:36.524769    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:36.524779    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:36.537189    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:36.537199    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:35.430156    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:39.053582    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:40.432771    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:40.433298    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:40.470649    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:40.470807    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:40.491705    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:40.491816    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:40.505942    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:40.506024    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:40.519450    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:40.519519    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:40.532868    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:40.532938    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:40.543428    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:40.543495    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:40.553933    8642 logs.go:276] 0 containers: []
	W0731 12:29:40.553948    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:40.554010    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:40.569342    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:40.569363    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:40.569369    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:40.580499    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:40.580509    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:40.592069    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:40.592083    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:40.609876    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:40.609889    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:40.627250    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:40.627260    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:40.653062    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:40.653155    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:40.654297    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:40.654304    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:40.669752    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:40.669765    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:40.684944    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:40.684957    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:40.696352    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:40.696362    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:40.707329    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:40.707339    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:40.732621    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:40.732637    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:40.767561    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:40.767573    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:40.780269    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:40.780282    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:40.802444    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:40.802456    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:40.813793    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:40.813802    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:40.817894    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:40.817900    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:40.831712    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:40.831728    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:40.846676    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:40.846687    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:40.846714    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:40.846723    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:40.846728    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:40.846733    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:40.846735    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:44.055694    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:44.055837    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:44.072961    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:44.073046    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:44.086157    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:44.086230    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:44.097723    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:44.097792    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:44.108750    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:44.108815    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:44.119716    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:44.119784    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:44.129941    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:44.130016    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:44.140470    8653 logs.go:276] 0 containers: []
	W0731 12:29:44.140481    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:44.140538    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:44.151934    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:44.151949    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:44.151955    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:44.163949    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:44.163961    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:44.168532    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:44.168542    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:44.183634    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:44.183644    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:44.194897    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:44.194908    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:44.212552    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:44.212562    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:44.224012    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:44.224023    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:44.239240    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:44.239252    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:44.279194    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:44.279206    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:44.292893    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:44.292902    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:44.304257    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:44.304268    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:44.319373    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:44.319384    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:44.356581    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:44.356591    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:44.371485    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:44.371498    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:44.383634    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:44.383647    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:44.407653    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:44.407664    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:44.421372    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:44.421383    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:44.433368    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:44.433384    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:44.445402    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:44.445413    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:46.958698    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:51.960804    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:51.960978    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:51.978513    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:51.978607    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:51.991421    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:51.991489    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:52.004379    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:52.004451    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:52.015344    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:52.015418    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:52.026224    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:52.026297    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:52.036774    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:52.036847    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:52.047183    8653 logs.go:276] 0 containers: []
	W0731 12:29:52.047193    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:52.047254    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:52.062537    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:29:52.062551    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:29:52.062557    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:29:52.075050    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:29:52.075060    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:29:52.089557    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:29:52.089571    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:29:52.101412    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:29:52.101423    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:29:52.114251    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:29:52.114266    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:29:52.126017    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:52.126030    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:52.161970    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:29:52.161982    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:29:52.176706    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:29:52.176722    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:29:52.188986    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:52.188998    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:52.215081    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:52.215091    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:52.219524    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:29:52.219533    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:29:52.233307    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:29:52.233317    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:29:52.245520    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:29:52.245535    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:29:52.263671    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:29:52.263683    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:52.276932    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:52.276948    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:29:52.319392    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:29:52.319400    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:29:52.330867    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:29:52.330880    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:29:52.342734    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:29:52.342744    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:29:52.354631    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:29:52.354646    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:29:50.850576    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:54.868923    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:55.852786    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:55.853001    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:55.878213    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:55.878335    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:55.894348    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:55.894433    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:55.910076    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:55.910158    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:55.921840    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:55.921909    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:55.932357    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:55.932422    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:55.942949    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:55.943023    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:55.979453    8642 logs.go:276] 0 containers: []
	W0731 12:29:55.979466    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:55.979530    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:55.993059    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:55.993077    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:55.993083    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:55.997776    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:55.997782    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:56.017970    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:56.017981    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:56.030711    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:56.030725    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:56.042160    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:56.042173    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:56.060335    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:56.060348    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:56.073298    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:56.073308    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:56.087702    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:56.087714    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:56.100232    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:56.100243    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:56.123539    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:56.123547    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:56.162202    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:56.162212    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:56.176007    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:56.176019    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:56.198128    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:56.198138    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:56.212224    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:56.212235    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:56.230739    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:56.230751    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:56.242837    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:56.242848    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:56.268837    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:56.268934    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:56.270121    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:56.270127    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:56.281813    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:56.281822    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:56.281849    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:56.281853    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:56.281858    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:56.281862    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:56.281865    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
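The two kubelet problems flagged above are node-authorization failures: under the Node authorizer, a kubelet may only read a ConfigMap that the apiserver can tie to a pod scheduled on that node, and "no relationship found between node ... and this object" means that link does not exist yet, which fits a control plane that is still failing its healthz checks after the upgrade restart. minikube surfaces these lines by scanning the journalctl output for known problem patterns (the logs.go:138 entries). A rough sketch of that kind of scan, assuming a simple substring match; minikube's real pattern set is larger:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // problemMarkers is a stand-in for minikube's real pattern list.
    var problemMarkers = []string{"is forbidden", "Failed to watch"}

    // findKubeletProblems returns journal lines that match a known marker.
    func findKubeletProblems(journal string) []string {
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, m := range problemMarkers {
    			if strings.Contains(line, m) {
    				problems = append(problems, line)
    				break
    			}
    		}
    	}
    	return problems
    }

    func main() {
    	journal := `kubelet[1653]: ... configmaps "kube-proxy" is forbidden ...`
    	for _, p := range findKubeletProblems(journal) {
    		fmt.Println("Found kubelet problem:", p)
    	}
    }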
	I0731 12:29:59.871005    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:59.871110    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:59.883767    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:29:59.883847    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:59.895576    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:29:59.895653    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:59.907152    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:29:59.907225    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:59.922450    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:29:59.922513    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:59.933530    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:29:59.933595    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:59.944718    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:29:59.944807    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:59.955256    8653 logs.go:276] 0 containers: []
	W0731 12:29:59.955268    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:59.955331    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:59.966288    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
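Each diagnostic pass starts by rediscovering the control-plane containers per component, as in the eight docker ps calls above: kubelet-managed containers are named k8s_<container>_<pod>_..., so filtering on name=k8s_etcd and formatting with {{.ID}} returns bare IDs (two per component here, consistent with an exited pre-restart instance sitting next to its replacement). A sketch of the same discovery step, run locally rather than over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers, running or exited,
    // whose name carries the kubelet prefix for the given component.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }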
	I0731 12:29:59.966307    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:59.966313    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:00.006311    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:00.006326    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:00.010845    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:00.010855    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:00.025958    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:00.025971    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:00.049875    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:00.049888    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:00.065264    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:00.065275    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:00.078281    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:00.078294    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:00.121954    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:00.121966    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:00.133818    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:00.133829    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:00.145979    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:00.145991    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:00.163914    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:00.163928    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:00.175924    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:00.175937    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:00.190041    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:00.190052    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:00.215345    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:00.215356    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:00.228438    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:00.228451    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:00.240303    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:00.240313    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:00.251969    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:00.251981    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:00.263388    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:00.263399    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:00.280642    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:00.280656    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:02.794039    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:07.796146    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
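The pair of lines above is the health-probe cycle in miniature: a GET against /healthz is issued at 12:30:02 and declared stopped five seconds later, which suggests a client timeout of roughly five seconds. A minimal sketch of such a probe; the timeout value and the relaxed TLS setting are assumptions for the sketch (the real client would be configured from the cluster's credentials):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // checkHealthz performs one bounded probe against the apiserver.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed; produces the Client.Timeout error above
    		Transport: &http.Transport{
    			// Sketch only: skip verification instead of loading the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	url := "https://10.0.2.15:8443/healthz"
    	if err := checkHealthz(url); err != nil {
    		fmt.Println("stopped:", url+":", err)
    		return
    	}
    	fmt.Println("apiserver healthy")
    }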
	I0731 12:30:07.796404    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:07.827764    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:07.827891    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:07.852953    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:07.853027    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:07.865227    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:07.865294    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:07.875955    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:07.876032    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:07.886980    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:07.887052    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:07.898871    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:07.898935    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:07.909194    8653 logs.go:276] 0 containers: []
	W0731 12:30:07.909204    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:07.909256    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:07.919976    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:07.919989    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:07.919994    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:07.956036    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:07.956050    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:07.971340    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:07.971350    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:07.983558    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:07.983569    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:07.994742    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:07.994752    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:07.999010    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:07.999016    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:08.012785    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:08.012795    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:08.026036    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:08.026052    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:08.067790    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:08.067805    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:08.081990    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:08.082000    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:08.095702    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:08.095714    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:08.113174    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:08.113186    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:08.124973    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:08.124984    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:08.150354    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:08.150372    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:08.164103    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:08.164113    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:08.175588    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:08.175603    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:08.187223    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:08.187238    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:08.198397    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:08.198411    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:08.211058    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:08.211072    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:06.285714    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
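Note the timestamp regression here: the line above (pid 8642, 12:30:06) was emitted before the preceding line (pid 8653, 12:30:08) because two minikube processes are writing to the same capture and each flushes independently, so ordering across pids is not guaranteed. When following one process through a stretch like this, it can help to split the stream on the pid column; a small filter, assuming the glog layout Lmmdd hh:mm:ss.uuuuuu PID file:line] msg:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // Usage: go run split.go 8642 < test.log
    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: split <pid>")
    		os.Exit(1)
    	}
    	pid := os.Args[1]
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines run long
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		// glog layout: severity+date, time, PID, file:line] — PID is field 3.
    		if len(fields) >= 3 && fields[2] == pid {
    			fmt.Println(sc.Text())
    		}
    	}
    }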
	I0731 12:30:10.726727    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:11.287777    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:11.287922    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:11.301550    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:11.301636    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:11.316848    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:11.316921    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:11.327076    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:11.327159    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:11.337531    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:11.337603    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:11.349407    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:11.349478    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:11.360020    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:11.360090    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:11.371151    8642 logs.go:276] 0 containers: []
	W0731 12:30:11.371164    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:11.371226    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:11.381815    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:11.381835    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:11.381840    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:11.393504    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:11.393516    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:11.407214    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:11.407225    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:11.425617    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:11.425631    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:11.445728    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:11.445739    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:11.463381    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:11.463391    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:11.485703    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:11.485714    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:11.502839    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:11.502850    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:11.514193    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:11.514203    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:11.540659    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:11.540755    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:11.541935    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:11.541939    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:11.577903    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:11.577916    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:11.592816    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:11.592826    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:11.615570    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:11.615580    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:11.633233    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:11.633245    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:11.637830    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:11.637838    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:11.649585    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:11.649600    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:11.660986    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:11.661000    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:11.686029    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:11.686038    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:11.686065    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:11.686069    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:11.686073    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:11.686077    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:11.686082    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:30:15.728906    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:15.729086    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:15.739943    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:15.740017    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:15.751287    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:15.751348    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:15.761830    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:15.761906    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:15.772743    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:15.772811    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:15.782619    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:15.782681    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:15.793040    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:15.793110    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:15.803399    8653 logs.go:276] 0 containers: []
	W0731 12:30:15.803411    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:15.803469    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:15.813654    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:15.813670    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:15.813675    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:15.827055    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:15.827067    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:15.838982    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:15.838995    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:15.854570    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:15.854586    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:15.871435    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:15.871445    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:15.882661    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:15.882672    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:15.905348    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:15.905356    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:15.944455    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:15.944468    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:15.958635    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:15.958646    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:15.971247    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:15.971259    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:15.983239    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:15.983252    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:15.994394    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:15.994405    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:16.029441    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:16.029452    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:16.041168    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:16.041179    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:16.053091    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:16.053102    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:16.070228    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:16.070239    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:16.082150    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:16.082163    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:16.094361    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:16.094372    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:16.098760    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:16.098766    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:18.622437    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:23.624684    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:23.624921    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:23.650986    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:23.651111    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:23.668476    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:23.668565    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:23.682113    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:23.682185    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:23.694262    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:23.694337    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:23.704507    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:23.704577    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:23.715136    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:23.715216    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:23.726502    8653 logs.go:276] 0 containers: []
	W0731 12:30:23.726515    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:23.726573    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:23.736874    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:23.736892    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:23.736897    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:23.779030    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:23.779038    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:23.790656    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:23.790666    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:23.802101    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:23.802116    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:23.848229    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:23.848240    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:23.862287    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:23.862298    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:21.689881    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:23.875805    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:23.875816    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:23.886942    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:23.886953    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:23.909466    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:23.909473    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:23.921430    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:23.921441    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:23.925707    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:23.925714    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:23.945830    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:23.945843    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:23.960578    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:23.960588    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:23.971943    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:23.971956    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:23.990254    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:23.990265    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:24.001879    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:24.001890    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:24.014335    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:24.014346    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:24.025830    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:24.025842    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:24.038623    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:24.038633    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:26.551632    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:26.690208    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:26.690440    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:26.732863    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:26.732966    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:26.755303    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:26.755382    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:26.766661    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:26.766736    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:26.778389    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:26.778462    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:26.796284    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:26.796353    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:26.806667    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:26.806737    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:26.817032    8642 logs.go:276] 0 containers: []
	W0731 12:30:26.817045    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:26.817103    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:26.828099    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:26.828119    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:26.828125    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:26.842545    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:26.842559    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:26.856815    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:26.856825    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:26.868219    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:26.868230    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:26.894324    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:26.894415    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:26.895546    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:26.895552    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:26.933339    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:26.933350    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:26.946319    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:26.946331    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:26.960641    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:26.960652    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:26.964911    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:26.964920    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:26.982902    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:26.982915    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:27.002351    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:27.002361    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:27.015950    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:27.015960    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:27.037309    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:27.037324    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:27.049301    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:27.049315    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:27.074390    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:27.074399    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:27.089150    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:27.089161    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:27.100555    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:27.100566    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:27.112582    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:27.112592    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:27.112618    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:27.112638    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:27.112642    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:27.112646    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:27.112649    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:30:31.553959    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:31.554349    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:31.587711    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:31.587847    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:31.607207    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:31.607309    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:31.621245    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:31.621329    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:31.633507    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:31.633592    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:31.644433    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:31.644507    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:31.656479    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:31.656554    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:31.667080    8653 logs.go:276] 0 containers: []
	W0731 12:30:31.667090    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:31.667152    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:31.678030    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:31.678046    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:31.678051    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:31.682947    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:31.682954    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:31.719959    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:31.719972    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:31.734756    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:31.734765    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:31.746616    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:31.746629    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:31.759030    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:31.759041    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:31.775659    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:31.775671    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:31.817568    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:31.817576    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:31.833841    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:31.833852    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:31.845933    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:31.845944    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:31.858728    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:31.858737    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:31.871469    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:31.871479    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:31.887708    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:31.887718    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:31.899252    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:31.899263    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:31.911258    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:31.911268    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:31.928410    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:31.928420    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:31.939678    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:31.939689    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:31.964058    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:31.964066    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:31.978855    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:31.978864    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:34.491593    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:37.116261    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:39.493456    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:39.493666    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:39.517988    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:39.518117    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:39.534777    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:39.534865    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:39.547829    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:39.547907    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:39.567337    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:39.567409    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:39.578701    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:39.578777    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:39.589552    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:39.589625    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:39.600027    8653 logs.go:276] 0 containers: []
	W0731 12:30:39.600039    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:39.600107    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:39.610995    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:39.611012    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:39.611019    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:39.615366    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:39.615372    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:39.627064    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:39.627077    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:39.637939    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:39.637950    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:39.650020    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:39.650029    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:39.664103    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:39.664114    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:39.704242    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:39.704253    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:39.722682    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:39.722693    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:39.734004    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:39.734016    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:39.745665    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:39.745677    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:39.757576    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:39.757587    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:39.775731    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:39.775741    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:39.800138    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:39.800146    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:39.813541    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:39.813554    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:39.851945    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:39.851961    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:39.867576    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:39.867587    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:39.879317    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:39.879329    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:39.890670    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:39.890682    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:39.902148    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:39.902159    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:42.418125    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:42.118971    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:42.119348    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:42.152756    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:42.152895    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:42.173292    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:42.173390    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:42.187709    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:42.187790    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:42.199944    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:42.200036    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:42.210072    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:42.210140    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:42.220559    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:42.220627    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:42.231423    8642 logs.go:276] 0 containers: []
	W0731 12:30:42.231433    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:42.231491    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:42.241744    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:42.241763    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:42.241769    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:42.255090    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:42.255101    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:42.268975    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:42.268988    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:42.284608    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:42.284619    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:42.296177    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:42.296187    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:42.321417    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:42.321509    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:42.322624    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:42.322629    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:42.326555    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:42.326563    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:42.360879    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:42.360891    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:42.382214    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:42.382235    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:42.396497    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:42.396509    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:42.409277    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:42.409290    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:42.424528    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:42.424540    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:42.438978    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:42.438989    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:42.450949    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:42.450960    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:42.468210    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:42.468220    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:42.492075    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:42.492086    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:42.506238    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:42.506251    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:42.528679    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:42.528693    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:42.528725    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:42.528729    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:42.528734    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:42.528738    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:42.528741    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
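
	[editor's note] The api_server.go lines below follow one fixed pattern: GET https://10.0.2.15:8443/healthz with a client timeout, log "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" when the apiserver never answers, then fall back to gathering logs. A minimal Go sketch of that polling loop, assuming only the URL and the roughly 5-second cadence visible in the log; the function names and the skip-verify TLS setup are illustrative, not minikube's actual implementation:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // pollHealthz issues one timed GET against the apiserver healthz
	    // endpoint, mirroring the api_server.go:253/269 lines above.
	    func pollHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: timeout,
	            Transport: &http.Transport{
	                // the apiserver serves a self-signed cert in this setup
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            // matches the "context deadline exceeded" failures in the log
	            return fmt.Errorf("stopped: %s: %w", url, err)
	        }
	        defer resp.Body.Close()
	        if resp.StatusCode != http.StatusOK {
	            return fmt.Errorf("healthz returned %d", resp.StatusCode)
	        }
	        return nil
	    }

	    func main() {
	        for {
	            if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
	                fmt.Println(err)
	                time.Sleep(5 * time.Second) // retry cadence, illustrative
	                continue
	            }
	            fmt.Println("apiserver healthy")
	            return
	        }
	    }
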
	I0731 12:30:47.420185    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:47.420418    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:47.440376    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:47.440475    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:47.453980    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:47.454053    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:47.465547    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:47.465625    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:47.476215    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:47.476281    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:47.487092    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:47.487163    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:47.497220    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:47.497285    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:47.507374    8653 logs.go:276] 0 containers: []
	W0731 12:30:47.507384    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:47.507436    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:47.517343    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:47.517363    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:47.517374    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:47.552850    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:47.552863    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:47.564217    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:47.564228    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:47.577138    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:47.577150    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:47.588948    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:47.588958    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:47.601091    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:47.601103    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:47.642024    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:47.642035    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:47.646351    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:47.646357    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:47.660123    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:47.660138    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:47.674659    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:47.674669    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:47.691664    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:47.691675    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:47.717116    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:47.717126    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:47.729449    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:47.729460    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:47.743941    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:47.743952    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:47.756326    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:47.756337    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:47.768049    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:47.768064    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:47.779923    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:47.779933    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:47.791430    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:47.791441    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:47.803193    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:47.803204    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
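
	[editor's note] Each gathering cycle above is two steps: enumerate a component's containers with a docker ps name filter (the logs.go:276 lines), then tail each container's last 400 log lines (the "docker logs --tail 400" runs). A self-contained Go sketch of that cycle, with plain os/exec standing in for minikube's ssh_runner; all names here are illustrative, not minikube's code:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainers mirrors "docker ps -a --filter=name=k8s_<component>
	    // --format={{.ID}}": one ID per matching container, running or exited.
	    func listContainers(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    // tailLogs mirrors the `docker logs --tail 400 <id>` gathering step.
	    func tailLogs(id string) (string, error) {
	        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	        return string(out), err
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
	            ids, err := listContainers(c)
	            if err != nil {
	                fmt.Println(c, ":", err)
	                continue
	            }
	            fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. the logs.go:276 lines
	            for _, id := range ids {
	                logs, _ := tailLogs(id)
	                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
	            }
	        }
	    }
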
	I0731 12:30:50.322345    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:52.532617    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:55.324796    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:55.324998    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:55.341596    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:30:55.341686    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:55.354442    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:30:55.354518    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:55.365933    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:30:55.366011    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:55.377079    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:30:55.377150    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:55.387534    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:30:55.387606    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:55.397553    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:30:55.397621    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:55.407736    8653 logs.go:276] 0 containers: []
	W0731 12:30:55.407748    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:55.407804    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:55.418694    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:30:55.418710    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:30:55.418716    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:30:55.439172    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:30:55.439186    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:30:55.450434    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:30:55.450445    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:30:55.461988    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:55.462000    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:30:55.500809    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:55.500818    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:55.505560    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:30:55.505569    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:30:55.518077    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:30:55.518087    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:30:55.529322    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:30:55.529334    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:30:55.541821    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:30:55.541832    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:30:55.561740    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:55.561751    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:55.584146    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:55.584155    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:55.625333    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:30:55.625345    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:30:55.648901    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:30:55.648913    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:55.660846    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:30:55.660860    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:30:55.675520    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:30:55.675534    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:30:55.690886    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:30:55.690899    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:30:55.702073    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:30:55.702084    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:30:55.713886    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:30:55.713900    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:30:55.727612    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:30:55.727622    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:30:58.244898    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:57.535074    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:57.535256    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:57.560506    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:57.560631    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:57.576098    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:57.576181    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:57.588966    8642 logs.go:276] 1 container: [c00b4e88bc49]
	I0731 12:30:57.589047    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:57.600220    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:57.600285    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:57.614455    8642 logs.go:276] 1 container: [dcabf240dffc]
	I0731 12:30:57.614527    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:57.625428    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:57.625502    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:57.635212    8642 logs.go:276] 0 containers: []
	W0731 12:30:57.635222    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:57.635282    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:57.645678    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:57.645695    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:57.645701    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:57.650013    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:57.650019    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:57.664001    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:57.664012    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:57.675642    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:57.675653    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:57.697687    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:57.697698    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:57.714594    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:57.714605    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:57.739330    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:57.739340    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:57.781798    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:57.781810    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:57.796988    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:57.796998    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:57.808276    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:57.808286    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:57.820243    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:57.820256    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:57.838491    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:57.838502    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:57.850031    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:57.850042    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:57.875777    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:57.875870    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:57.876978    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:57.876982    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:57.889590    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:57.889601    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:57.903076    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:57.903090    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:57.917191    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:57.917206    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:57.929499    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:57.929509    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:57.929535    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:57.929539    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:57.929551    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:57.929555    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:57.929558    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
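
	[editor's note] The recurring kubelet problem flagged above is the Node authorizer refusing the node user ("system:node:stopped-upgrade-155000") a ConfigMap list because no pod bound to that node references the object yet. A hypothetical client-go probe that should reproduce the same Forbidden message when run with the node's credentials; the kubeconfig path and namespace come from the log, everything else is an assumption for illustration:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // authenticate as the node, the way the kubelet does
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        _, err = cs.CoreV1().ConfigMaps("kube-system").List(context.TODO(), metav1.ListOptions{})
	        // expect: configmaps is forbidden: User "system:node:..." cannot
	        // list resource "configmaps" ... no relationship found between
	        // node '...' and this object
	        fmt.Println(err)
	    }
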
	I0731 12:31:03.247184    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:03.247658    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:03.290440    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:03.290583    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:03.311829    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:03.311937    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:03.327852    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:03.327936    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:03.340107    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:03.340178    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:03.351250    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:03.351324    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:03.362169    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:03.362246    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:03.373022    8653 logs.go:276] 0 containers: []
	W0731 12:31:03.373036    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:03.373092    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:03.384733    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:03.384749    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:03.384754    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:03.404184    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:03.404194    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:03.415789    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:03.415802    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:03.431579    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:03.431590    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:03.443203    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:03.443213    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:03.455598    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:03.455609    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:03.467014    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:03.467024    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:03.479828    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:03.479838    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:03.504477    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:03.504488    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:03.547608    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:03.547625    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:03.583786    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:03.583798    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:03.596365    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:03.596376    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:03.610849    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:03.610861    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:03.622092    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:03.622104    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:03.636647    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:03.636657    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:03.648361    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:03.648373    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:03.665203    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:03.665213    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:03.676919    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:03.676930    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:03.681666    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:03.681673    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:06.196876    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:07.931447    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:11.199495    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:11.199776    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:11.229156    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:11.229292    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:11.248258    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:11.248366    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:11.263121    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:11.263201    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:11.275186    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:11.275259    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:11.285906    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:11.285976    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:11.297332    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:11.297410    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:11.308774    8653 logs.go:276] 0 containers: []
	W0731 12:31:11.308784    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:11.308850    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:11.318971    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:11.318986    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:11.318992    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:11.330311    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:11.330324    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:11.342535    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:11.342546    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:11.359922    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:11.359936    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:11.374561    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:11.374576    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:11.414265    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:11.414275    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:11.449788    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:11.449802    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:11.463797    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:11.463807    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:11.475180    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:11.475190    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:11.485942    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:11.485953    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:11.508091    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:11.508100    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:11.512988    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:11.512997    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:11.525717    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:11.525729    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:11.540882    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:11.540892    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:11.553008    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:11.553021    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:11.565027    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:11.565037    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:11.579764    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:11.579780    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:11.594882    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:11.594896    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:11.607010    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:11.607022    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:12.933735    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:12.933940    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:12.963261    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:31:12.963398    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:12.981759    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:31:12.981845    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:12.995147    8642 logs.go:276] 1 container: [c00b4e88bc49]
	I0731 12:31:12.995220    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:13.006510    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:31:13.006584    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:13.019399    8642 logs.go:276] 1 container: [dcabf240dffc]
	I0731 12:31:13.019463    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:13.030024    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:31:13.030097    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:13.039643    8642 logs.go:276] 0 containers: []
	W0731 12:31:13.039654    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:13.039710    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:13.049933    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:31:13.049959    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:13.049964    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:31:13.076790    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:31:13.076886    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:31:13.078010    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:31:13.078017    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:31:13.095304    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:31:13.095315    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:13.107967    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:31:13.107982    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:31:13.121904    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:31:13.121913    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:31:13.142825    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:31:13.142840    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:31:13.165369    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:31:13.165380    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:31:13.180041    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:13.180051    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:13.204091    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:31:13.204098    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:31:13.224707    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:31:13.224717    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:31:13.236324    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:31:13.236335    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:31:13.256006    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:31:13.256020    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:31:13.275681    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:31:13.275694    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:31:13.287137    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:13.287148    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:13.292179    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:13.292186    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:13.328855    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:31:13.328869    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:31:13.342467    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:31:13.342481    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:31:13.361686    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:31:13.361696    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:31:13.361722    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:31:13.361726    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:31:13.361730    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:31:13.361733    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:31:13.361737    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:31:14.124010    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:19.126125    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:19.126468    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:19.158180    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:19.158309    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:19.176137    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:19.176233    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:19.190028    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:19.190102    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:19.201426    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:19.201496    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:19.213150    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:19.213220    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:19.224292    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:19.224355    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:19.234709    8653 logs.go:276] 0 containers: []
	W0731 12:31:19.234721    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:19.234784    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:19.250562    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:19.250579    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:19.250587    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:19.255023    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:19.255033    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:19.272165    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:19.272177    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:19.283615    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:19.283626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:19.297329    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:19.297340    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:19.311806    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:19.311819    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:19.323100    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:19.323110    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:19.334877    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:19.334888    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:19.346945    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:19.346955    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:19.370530    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:19.370539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:19.388424    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:19.388434    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:19.399958    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:19.399969    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:19.411245    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:19.411259    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:19.422767    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:19.422780    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:19.434238    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:19.434249    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:19.476393    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:19.476400    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:19.510917    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:19.510927    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:19.523328    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:19.523339    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:19.535573    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:19.535584    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:22.050692    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:23.365558    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:27.051847    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:27.052104    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:27.074944    8653 logs.go:276] 2 containers: [5c115b5d3034 c5e97c00f1d3]
	I0731 12:31:27.075059    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:27.091568    8653 logs.go:276] 2 containers: [69812d383bef da726d200e7f]
	I0731 12:31:27.091662    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:27.105011    8653 logs.go:276] 2 containers: [52dd566e4fce f88a399a353f]
	I0731 12:31:27.105094    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:27.116394    8653 logs.go:276] 2 containers: [56114e66e87c bed6d227dd72]
	I0731 12:31:27.116470    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:27.126738    8653 logs.go:276] 2 containers: [e0f397434ed6 8d493cbead92]
	I0731 12:31:27.126809    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:27.137166    8653 logs.go:276] 2 containers: [b5e33ce664e7 aa920439f439]
	I0731 12:31:27.137235    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:27.147406    8653 logs.go:276] 0 containers: []
	W0731 12:31:27.147419    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:27.147476    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:27.158220    8653 logs.go:276] 2 containers: [f34007130d2b 41559b97a2f2]
	I0731 12:31:27.158238    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:27.158244    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:27.163003    8653 logs.go:123] Gathering logs for kube-apiserver [c5e97c00f1d3] ...
	I0731 12:31:27.163010    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5e97c00f1d3"
	I0731 12:31:27.175405    8653 logs.go:123] Gathering logs for etcd [69812d383bef] ...
	I0731 12:31:27.175416    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69812d383bef"
	I0731 12:31:27.189174    8653 logs.go:123] Gathering logs for storage-provisioner [f34007130d2b] ...
	I0731 12:31:27.189187    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f34007130d2b"
	I0731 12:31:27.200839    8653 logs.go:123] Gathering logs for kube-apiserver [5c115b5d3034] ...
	I0731 12:31:27.200852    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c115b5d3034"
	I0731 12:31:27.219087    8653 logs.go:123] Gathering logs for coredns [52dd566e4fce] ...
	I0731 12:31:27.219101    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52dd566e4fce"
	I0731 12:31:27.230853    8653 logs.go:123] Gathering logs for coredns [f88a399a353f] ...
	I0731 12:31:27.230864    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f88a399a353f"
	I0731 12:31:27.242797    8653 logs.go:123] Gathering logs for kube-scheduler [56114e66e87c] ...
	I0731 12:31:27.242808    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56114e66e87c"
	I0731 12:31:27.255090    8653 logs.go:123] Gathering logs for kube-scheduler [bed6d227dd72] ...
	I0731 12:31:27.255100    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bed6d227dd72"
	I0731 12:31:27.266385    8653 logs.go:123] Gathering logs for kube-proxy [e0f397434ed6] ...
	I0731 12:31:27.266397    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e0f397434ed6"
	I0731 12:31:27.277646    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:27.277657    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:27.313759    8653 logs.go:123] Gathering logs for etcd [da726d200e7f] ...
	I0731 12:31:27.313770    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da726d200e7f"
	I0731 12:31:27.328867    8653 logs.go:123] Gathering logs for kube-controller-manager [b5e33ce664e7] ...
	I0731 12:31:27.328877    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5e33ce664e7"
	I0731 12:31:27.347054    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:31:27.347064    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:27.359111    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:27.359122    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:31:27.400817    8653 logs.go:123] Gathering logs for kube-proxy [8d493cbead92] ...
	I0731 12:31:27.400827    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d493cbead92"
	I0731 12:31:27.413503    8653 logs.go:123] Gathering logs for kube-controller-manager [aa920439f439] ...
	I0731 12:31:27.413517    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa920439f439"
	I0731 12:31:27.430900    8653 logs.go:123] Gathering logs for storage-provisioner [41559b97a2f2] ...
	I0731 12:31:27.430909    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41559b97a2f2"
	I0731 12:31:27.441897    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:27.441909    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:28.367807    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:28.367850    8642 kubeadm.go:597] duration metric: took 4m14.509327709s to restartPrimaryControlPlane
	W0731 12:31:28.367901    8642 out.go:239] ! Unable to restart control-plane node(s), will reset cluster
	I0731 12:31:28.367919    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:31:29.308771    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:31:29.313645    8642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:31:29.316480    8642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:31:29.319251    8642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:31:29.319257    8642 kubeadm.go:157] found existing configuration files:
	
	I0731 12:31:29.319279    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf
	I0731 12:31:29.321858    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:31:29.321885    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:31:29.324558    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf
	I0731 12:31:29.327827    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:31:29.327850    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:31:29.330693    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf
	I0731 12:31:29.333142    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:31:29.333167    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:31:29.336141    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf
	I0731 12:31:29.339236    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:31:29.339256    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
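
	(The grep/rm sequence above is minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails. Here every file is already absent, so each grep exits with status 2 and each rm -f is a no-op. A minimal Go sketch of that logic follows, under the assumption that commands run locally rather than over minikube's ssh_runner; file names and the endpoint are taken from the log above, everything else is illustrative.)

	    // stale_cleanup.go - illustrative only; the helper layout is assumed.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:51260" // from the log above
	        for _, conf := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            // grep exits non-zero when the endpoint is missing or the file
	            // does not exist; either way the file is treated as stale.
	            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
	                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
	                exec.Command("sudo", "rm", "-f", conf).Run() // rm -f tolerates a missing file
	            }
	        }
	    }
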
	I0731 12:31:29.342038    8642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:31:29.359965    8642 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:31:29.359996    8642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:31:29.418543    8642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:31:29.418627    8642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:31:29.418673    8642 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:31:29.467736    8642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:31:29.471779    8642 out.go:204]   - Generating certificates and keys ...
	I0731 12:31:29.471815    8642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:31:29.471845    8642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:31:29.471880    8642 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:31:29.471925    8642 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:31:29.471963    8642 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:31:29.472008    8642 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:31:29.473548    8642 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:31:29.473581    8642 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:31:29.473619    8642 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:31:29.473665    8642 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:31:29.473685    8642 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:31:29.473715    8642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:31:29.616962    8642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:31:29.801903    8642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:31:29.914312    8642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:31:29.981148    8642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:31:30.009503    8642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:31:30.009955    8642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:31:30.009978    8642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:31:30.093823    8642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:31:29.966871    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:30.095238    8642 out.go:204]   - Booting up control plane ...
	I0731 12:31:30.095286    8642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:31:30.095344    8642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:31:30.095411    8642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:31:30.095466    8642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:31:30.095675    8642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:31:34.595843    8642 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501250 seconds
	I0731 12:31:34.595913    8642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:31:34.604571    8642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:31:35.112987    8642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:31:35.113168    8642 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-155000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:31:35.618896    8642 kubeadm.go:310] [bootstrap-token] Using token: ugsrn3.bj4t8nxbtxr4fh6p
	I0731 12:31:35.624960    8642 out.go:204]   - Configuring RBAC rules ...
	I0731 12:31:35.625026    8642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:31:35.625075    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:31:35.633326    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:31:35.634241    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:31:35.635472    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:31:35.637275    8642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:31:35.640826    8642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:31:35.828797    8642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:31:36.024937    8642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:31:36.025470    8642 kubeadm.go:310] 
	I0731 12:31:36.025591    8642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:31:36.025599    8642 kubeadm.go:310] 
	I0731 12:31:36.025638    8642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:31:36.025645    8642 kubeadm.go:310] 
	I0731 12:31:36.025657    8642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:31:36.025692    8642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:31:36.025719    8642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:31:36.025722    8642 kubeadm.go:310] 
	I0731 12:31:36.025747    8642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:31:36.025753    8642 kubeadm.go:310] 
	I0731 12:31:36.025776    8642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:31:36.025779    8642 kubeadm.go:310] 
	I0731 12:31:36.025802    8642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:31:36.025845    8642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:31:36.025889    8642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:31:36.025892    8642 kubeadm.go:310] 
	I0731 12:31:36.025928    8642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:31:36.025986    8642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:31:36.025993    8642 kubeadm.go:310] 
	I0731 12:31:36.026038    8642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugsrn3.bj4t8nxbtxr4fh6p \
	I0731 12:31:36.026134    8642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 \
	I0731 12:31:36.026146    8642 kubeadm.go:310] 	--control-plane 
	I0731 12:31:36.026149    8642 kubeadm.go:310] 
	I0731 12:31:36.026201    8642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:31:36.026206    8642 kubeadm.go:310] 
	I0731 12:31:36.026272    8642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugsrn3.bj4t8nxbtxr4fh6p \
	I0731 12:31:36.026321    8642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 
	I0731 12:31:36.026411    8642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:31:36.026508    8642 cni.go:84] Creating CNI manager for ""
	I0731 12:31:36.026519    8642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:31:36.030882    8642 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:31:36.039967    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:31:36.043420    8642 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
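
	(The 496-byte conflist written above is not reproduced in the log. For reference, a bridge conflist of the general shape minikube generates for the "bridge CNI" step looks like the following; the contents are illustrative rather than the actual file, and the pod subnet in particular is an assumption.)

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "forceAddress": false,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": {"portMappings": true}
	        }
	      ]
	    }
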
	I0731 12:31:36.050256    8642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:31:36.050316    8642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:31:36.050334    8642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-155000 minikube.k8s.io/updated_at=2024_07_31T12_31_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=stopped-upgrade-155000 minikube.k8s.io/primary=true
	I0731 12:31:36.099504    8642 kubeadm.go:1113] duration metric: took 49.24275ms to wait for elevateKubeSystemPrivileges
	I0731 12:31:36.099562    8642 ops.go:34] apiserver oom_adj: -16
	I0731 12:31:36.099575    8642 kubeadm.go:394] duration metric: took 4m22.257499708s to StartCluster
	I0731 12:31:36.099586    8642 settings.go:142] acquiring lock: {Name:mkc78613f561547480fb9245c5115e473f0ba2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:36.099735    8642 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:31:36.100113    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:36.100327    8642 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:31:36.100378    8642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:31:36.100420    8642 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-155000"
	I0731 12:31:36.100430    8642 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-155000"
	W0731 12:31:36.100435    8642 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:31:36.100445    8642 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-155000"
	I0731 12:31:36.100448    8642 host.go:66] Checking if "stopped-upgrade-155000" exists ...
	I0731 12:31:36.100454    8642 config.go:182] Loaded profile config "stopped-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:31:36.100457    8642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-155000"
	I0731 12:31:36.104968    8642 out.go:177] * Verifying Kubernetes components...
	I0731 12:31:36.105756    8642 kapi.go:59] client config for stopped-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10227c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:31:36.109198    8642 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-155000"
	W0731 12:31:36.109217    8642 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:31:36.109229    8642 host.go:66] Checking if "stopped-upgrade-155000" exists ...
	I0731 12:31:36.109801    8642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:36.109807    8642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:31:36.109813    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:31:36.112964    8642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:31:34.968959    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:34.969042    8653 kubeadm.go:597] duration metric: took 4m5.430321167s to restartPrimaryControlPlane
	W0731 12:31:34.969102    8653 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:31:34.969126    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:31:36.077179    8653 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.108074625s)
	I0731 12:31:36.077238    8653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:31:36.082600    8653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:31:36.085803    8653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:31:36.088707    8653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:31:36.088716    8653 kubeadm.go:157] found existing configuration files:
	
	I0731 12:31:36.088762    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf
	I0731 12:31:36.092454    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:31:36.092509    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:31:36.095897    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf
	I0731 12:31:36.098912    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:31:36.098959    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:31:36.102631    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf
	I0731 12:31:36.105010    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:31:36.105032    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:31:36.107827    8653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf
	I0731 12:31:36.110847    8653 kubeadm.go:163] "https://control-plane.minikube.internal:51329" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51329 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:31:36.110882    8653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:31:36.113603    8653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:31:36.131651    8653 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:31:36.131694    8653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:31:36.180328    8653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:31:36.180389    8653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:31:36.180471    8653 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:31:36.232837    8653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:31:36.238449    8653 out.go:204]   - Generating certificates and keys ...
	I0731 12:31:36.238488    8653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:31:36.238531    8653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:31:36.238575    8653 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:31:36.238613    8653 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:31:36.238737    8653 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:31:36.238839    8653 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:31:36.238881    8653 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:31:36.238915    8653 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:31:36.238968    8653 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:31:36.239053    8653 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:31:36.239075    8653 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:31:36.239105    8653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:31:36.431300    8653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:31:36.465706    8653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:31:36.524288    8653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:31:36.760169    8653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:31:36.789345    8653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:31:36.789620    8653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:31:36.789697    8653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:31:36.885020    8653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:31:36.888791    8653 out.go:204]   - Booting up control plane ...
	I0731 12:31:36.888867    8653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:31:36.888909    8653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:31:36.888973    8653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:31:36.889026    8653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:31:36.889107    8653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:31:36.116989    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:31:36.121010    8642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:36.121021    8642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:31:36.121030    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:31:36.207185    8642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:31:36.213969    8642 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:31:36.214036    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:31:36.218743    8642 api_server.go:72] duration metric: took 118.404542ms to wait for apiserver process to appear ...
	I0731 12:31:36.218754    8642 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:31:36.218764    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:36.251044    8642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:36.266613    8642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:41.390698    8653 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503783 seconds
	I0731 12:31:41.390778    8653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:31:41.394894    8653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:31:41.914752    8653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:31:41.915134    8653 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-992000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:31:42.418233    8653 kubeadm.go:310] [bootstrap-token] Using token: kogid8.knahutph1x867ol2
	I0731 12:31:42.421756    8653 out.go:204]   - Configuring RBAC rules ...
	I0731 12:31:42.421808    8653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:31:42.421848    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:31:42.423891    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:31:42.428369    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:31:42.429205    8653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:31:42.429945    8653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:31:42.433058    8653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:31:42.632834    8653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:31:42.824894    8653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:31:42.825322    8653 kubeadm.go:310] 
	I0731 12:31:42.825357    8653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:31:42.825363    8653 kubeadm.go:310] 
	I0731 12:31:42.825414    8653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:31:42.825420    8653 kubeadm.go:310] 
	I0731 12:31:42.825436    8653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:31:42.825468    8653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:31:42.825502    8653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:31:42.825507    8653 kubeadm.go:310] 
	I0731 12:31:42.825539    8653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:31:42.825547    8653 kubeadm.go:310] 
	I0731 12:31:42.825573    8653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:31:42.825578    8653 kubeadm.go:310] 
	I0731 12:31:42.825605    8653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:31:42.825652    8653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:31:42.825699    8653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:31:42.825702    8653 kubeadm.go:310] 
	I0731 12:31:42.825757    8653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:31:42.825802    8653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:31:42.825806    8653 kubeadm.go:310] 
	I0731 12:31:42.825853    8653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kogid8.knahutph1x867ol2 \
	I0731 12:31:42.825916    8653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 \
	I0731 12:31:42.825928    8653 kubeadm.go:310] 	--control-plane 
	I0731 12:31:42.825931    8653 kubeadm.go:310] 
	I0731 12:31:42.825973    8653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:31:42.825976    8653 kubeadm.go:310] 
	I0731 12:31:42.826018    8653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kogid8.knahutph1x867ol2 \
	I0731 12:31:42.826070    8653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 
	I0731 12:31:42.826127    8653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:31:42.826135    8653 cni.go:84] Creating CNI manager for ""
	I0731 12:31:42.826143    8653 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:31:42.829893    8653 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:31:42.836905    8653 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:31:42.839796    8653 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 12:31:42.844502    8653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:31:42.844543    8653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:31:42.844563    8653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-992000 minikube.k8s.io/updated_at=2024_07_31T12_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=running-upgrade-992000 minikube.k8s.io/primary=true
	I0731 12:31:42.897899    8653 ops.go:34] apiserver oom_adj: -16
	I0731 12:31:42.897946    8653 kubeadm.go:1113] duration metric: took 53.439417ms to wait for elevateKubeSystemPrivileges
	I0731 12:31:42.897974    8653 kubeadm.go:394] duration metric: took 4m13.373701792s to StartCluster
	I0731 12:31:42.897985    8653 settings.go:142] acquiring lock: {Name:mkc78613f561547480fb9245c5115e473f0ba2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:42.898068    8653 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:31:42.898478    8653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:42.898678    8653 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:31:42.898689    8653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:31:42.898726    8653 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-992000"
	I0731 12:31:42.898756    8653 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-992000"
	W0731 12:31:42.898761    8653 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:31:42.898761    8653 config.go:182] Loaded profile config "running-upgrade-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:31:42.898772    8653 host.go:66] Checking if "running-upgrade-992000" exists ...
	I0731 12:31:42.898795    8653 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-992000"
	I0731 12:31:42.898807    8653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-992000"
	I0731 12:31:42.899682    8653 kapi.go:59] client config for running-upgrade-992000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/running-upgrade-992000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dc81b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:31:42.899803    8653 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-992000"
	W0731 12:31:42.899807    8653 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:31:42.899819    8653 host.go:66] Checking if "running-upgrade-992000" exists ...
	I0731 12:31:42.902824    8653 out.go:177] * Verifying Kubernetes components...
	I0731 12:31:42.903169    8653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:42.906974    8653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:31:42.906981    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:31:42.910742    8653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:31:42.914781    8653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:31:42.918786    8653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:42.918793    8653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:31:42.918799    8653 sshutil.go:53] new ssh client: &{IP:localhost Port:51265 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/running-upgrade-992000/id_rsa Username:docker}
	I0731 12:31:43.008732    8653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:31:43.014444    8653 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:31:43.014491    8653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:31:43.018323    8653 api_server.go:72] duration metric: took 119.63725ms to wait for apiserver process to appear ...
	I0731 12:31:43.018331    8653 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:31:43.018337    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:43.047462    8653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:43.069759    8653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:41.220738    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:41.220770    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:48.018328    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:48.018351    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:46.220846    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:46.220869    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:53.020054    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:53.020074    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:51.221025    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:51.221044    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:58.020071    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:58.020091    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:56.221410    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:56.221447    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:03.020174    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:03.020204    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:01.221833    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:01.221874    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:06.222478    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:06.222515    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
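
	(From here the two runs, pids 8642 and 8653, interleave the same pattern: a healthz probe against https://10.0.2.15:8443/healthz that times out after roughly five seconds, followed by the next attempt, until the overall 6m0s node wait expires. A Go sketch of such a poll loop follows; the 5s client timeout and 1s retry pause are inferred from the spacing of the log lines, not taken from minikube's source.)

	    // healthz_poll.go - illustrative poll loop; constants are assumptions.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the Client.Timeout errors above
	            // This sketch skips cert verification; minikube instead pins the
	            // cluster CA from its ca.crt.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                fmt.Println("stopped:", err) // e.g. context deadline exceeded
	                time.Sleep(time.Second)      // avoid a busy loop on fast failures
	                continue
	            }
	            body, _ := io.ReadAll(resp.Body)
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	                fmt.Println("apiserver is healthy")
	                return
	            }
	        }
	        fmt.Println("timed out waiting for apiserver healthz")
	    }
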
	W0731 12:32:06.628483    8642 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:32:06.632827    8642 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:32:08.020348    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:08.020375    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:06.639593    8642 addons.go:510] duration metric: took 30.5402645s for enable addons: enabled=[storage-provisioner]
	I0731 12:32:13.020630    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:13.020679    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:32:13.373151    8653 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:32:13.377208    8653 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:32:13.389247    8653 addons.go:510] duration metric: took 30.491599167s for enable addons: enabled=[storage-provisioner]
	I0731 12:32:11.223650    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:11.223688    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:18.021194    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:18.021283    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:16.224824    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:16.224863    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:23.021973    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:23.022006    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:21.226315    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:21.226402    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:28.022834    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:28.022874    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:26.227829    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:26.227869    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:33.024054    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:33.024097    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:31.229915    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:31.229937    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:38.025525    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:38.025583    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:36.231931    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:36.232041    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:36.242668    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:32:36.242748    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:36.252715    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:32:36.252787    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:36.263261    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:32:36.263325    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:36.274074    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:32:36.274149    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:36.284378    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:32:36.284453    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:36.294907    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:32:36.294970    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:36.305032    8642 logs.go:276] 0 containers: []
	W0731 12:32:36.305043    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:36.305096    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:36.315645    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:32:36.315665    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:32:36.315671    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:32:36.337651    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:32:36.337663    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:32:36.348935    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:32:36.348946    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:32:36.364128    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:32:36.364139    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:32:36.381795    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:32:36.381805    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:32:36.395868    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:36.395879    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:36.400305    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:36.400315    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:36.440454    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:32:36.440468    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:32:36.453157    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:32:36.453171    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:32:36.464985    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:32:36.464998    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:32:36.480873    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:36.480886    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:36.505936    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:32:36.505944    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:36.517026    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:36.517037    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:32:36.532390    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:36.532486    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:36.549947    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:36.549955    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:32:36.549983    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:32:36.549988    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:36.549993    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:36.549996    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:36.549999    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
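
	(Each "Gathering logs" block above is the same diagnostic sweep: list the container ID for each kubernetes component via a docker name filter, then tail its last 400 log lines. A compact sketch of that sweep follows, assuming local docker access; minikube runs the identical commands inside the guest over SSH.)

	    // gather_logs.go - illustrative diagnostic sweep.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, name := range components {
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	            if err != nil {
	                fmt.Println("docker ps failed:", err)
	                continue
	            }
	            ids := strings.Fields(string(out))
	            if len(ids) == 0 {
	                fmt.Printf("No container was found matching %q\n", name)
	                continue
	            }
	            for _, id := range ids {
	                fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Print(string(logs))
	            }
	        }
	    }
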
	I0731 12:32:43.027403    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:43.027511    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:43.041034    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:32:43.041123    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:43.053340    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:32:43.053414    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:43.063985    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:32:43.064061    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:43.074190    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:32:43.074261    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:43.084900    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:32:43.084967    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:43.095566    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:32:43.095634    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:43.105813    8653 logs.go:276] 0 containers: []
	W0731 12:32:43.105824    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:43.105880    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:43.116110    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:32:43.116125    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:43.116133    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:43.120914    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:43.120921    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:43.157433    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:32:43.157444    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:32:43.172203    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:32:43.172216    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:32:43.188614    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:32:43.188625    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:32:43.200385    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:32:43.200396    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:32:43.212526    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:43.212537    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:43.237556    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:32:43.237567    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:43.249213    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:43.249223    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:43.283968    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:32:43.283981    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:32:43.295112    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:32:43.295122    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:32:43.309998    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:32:43.310008    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:32:43.330118    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:32:43.330131    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
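
Each failed probe triggers a gathering cycle that first resolves component container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, exactly as run over SSH above. A stand-alone sketch of that lookup, run locally rather than through minikube's ssh_runner; the helper name containerIDs is hypothetical and error handling is simplified.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or exited)
    // whose name matches k8s_<component>, mirroring the filter in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // The same component list the gathering cycle above walks through.
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Matches the log's "N containers: [...]" lines; an empty slice
            // corresponds to `No container was found matching "kindnet"`.
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }
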
	I0731 12:32:45.843873    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:46.553128    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:50.845974    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:50.846139    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:50.861301    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:32:50.861391    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:50.873775    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:32:50.873853    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:50.884658    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:32:50.884732    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:50.894696    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:32:50.894764    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:50.905165    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:32:50.905242    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:50.915923    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:32:50.915995    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:50.926302    8653 logs.go:276] 0 containers: []
	W0731 12:32:50.926312    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:50.926377    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:50.936552    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:32:50.936567    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:32:50.936574    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:32:50.950329    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:32:50.950343    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:32:50.961636    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:32:50.961649    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:32:50.977770    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:32:50.977784    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:32:50.990850    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:50.990861    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:51.016931    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:51.016939    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:51.052395    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:51.052402    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:51.057103    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:51.057113    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:51.093852    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:32:51.093864    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:32:51.108800    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:32:51.108812    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:32:51.120429    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:32:51.120440    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:32:51.132542    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:32:51.132554    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:32:51.150004    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:32:51.150016    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:53.664620    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:51.555230    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:51.555353    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:51.567545    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:32:51.567657    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:51.578453    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:32:51.578524    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:51.589022    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:32:51.589094    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:51.599739    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:32:51.599815    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:51.610061    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:32:51.610134    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:51.620783    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:32:51.620853    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:51.630950    8642 logs.go:276] 0 containers: []
	W0731 12:32:51.630960    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:51.631019    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:51.641244    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:32:51.641259    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:51.641264    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:32:51.658326    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:51.658421    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:51.676149    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:51.676160    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:51.711163    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:32:51.711174    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:32:51.725409    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:32:51.725422    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:32:51.739993    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:32:51.740006    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:32:51.756657    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:32:51.756670    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:51.768030    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:51.768045    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:51.772459    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:32:51.772467    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:32:51.788751    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:32:51.788761    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:32:51.800074    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:32:51.800088    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:32:51.814405    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:32:51.814415    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:32:51.830120    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:32:51.830131    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:32:51.841480    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:51.841491    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:51.866809    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:51.866817    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:32:51.866842    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:32:51.866849    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:51.866854    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:51.866859    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:51.866862    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
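
The "Found kubelet problem" warnings above come from scanning the journalctl -u kubelet -n 400 output for known failure patterns; the RBAC error repeated in this report is one such match. A rough sketch of that kind of scan follows; the substring list is an assumption for illustration, and minikube's actual matcher in logs.go is more elaborate.

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Pull the last 400 kubelet journal lines, as in the log above.
        out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            fmt.Println("journalctl failed:", err)
            return
        }
        // Assumed problem patterns; the ConfigMap RBAC failure in this
        // report would match "forbidden" and "Failed to watch".
        patterns := []string{"Failed to watch", "failed to list", "forbidden"}
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            line := sc.Text()
            for _, p := range patterns {
                if strings.Contains(line, p) {
                    fmt.Println("Found kubelet problem:", line)
                    break
                }
            }
        }
    }
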
	I0731 12:32:58.665882    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:58.665998    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:58.678706    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:32:58.678789    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:58.689652    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:32:58.689733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:58.701991    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:32:58.702063    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:58.715971    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:32:58.716042    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:58.726573    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:32:58.726648    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:58.736910    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:32:58.736977    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:58.746734    8653 logs.go:276] 0 containers: []
	W0731 12:32:58.746745    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:58.746805    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:58.757518    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:32:58.757536    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:58.757543    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:58.762350    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:32:58.762360    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:32:58.776748    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:32:58.776762    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:32:58.788408    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:58.788419    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:58.812578    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:32:58.812585    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:58.824256    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:58.824267    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:32:58.859690    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:32:58.859699    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:32:58.874539    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:32:58.874552    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:32:58.892120    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:32:58.892133    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:32:58.903734    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:32:58.903744    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:32:58.915202    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:32:58.915214    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:32:58.926689    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:32:58.926699    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:32:58.943898    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:58.943910    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:01.481152    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:01.870673    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:06.481451    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:06.481639    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:06.495277    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:06.495354    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:06.506492    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:06.506567    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:06.517281    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:06.517357    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:06.528321    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:06.528395    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:06.547484    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:06.547554    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:06.557871    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:06.557937    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:06.568111    8653 logs.go:276] 0 containers: []
	W0731 12:33:06.568124    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:06.568187    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:06.578765    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:06.578779    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:06.578788    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:06.592690    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:06.592703    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:06.610311    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:06.610321    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:06.635294    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:06.635305    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:06.670258    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:06.670269    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:06.685190    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:06.685203    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:06.698920    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:06.698930    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:06.714362    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:06.714373    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:06.726027    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:06.726040    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:06.738341    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:06.738356    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:06.775347    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:06.775358    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:06.780081    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:06.780091    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:06.792827    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:06.792839    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:06.872802    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:06.872879    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:06.883643    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:06.883720    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:06.893849    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:06.893915    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:06.904739    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:33:06.904867    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:06.915962    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:06.916033    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:06.929941    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:06.930008    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:06.940762    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:06.940837    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:06.951030    8642 logs.go:276] 0 containers: []
	W0731 12:33:06.951039    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:06.951092    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:06.961898    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:06.961913    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:06.961919    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:06.979017    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:06.979030    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:06.990813    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:06.990824    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:07.002707    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:07.002721    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:07.017604    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:07.017613    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:07.053881    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:07.053893    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:07.069059    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:07.069073    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:07.084041    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:07.084057    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:07.098542    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:07.098558    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:07.112897    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:07.112910    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:07.124758    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:07.124770    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:07.140356    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:07.140453    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:07.157677    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:07.157684    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:07.162238    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:07.162249    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:07.186719    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:07.186729    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:07.186756    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:07.186761    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:07.186782    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:07.186787    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:07.186790    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
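
Between probes, each cycle also tails the last 400 lines of every component container with docker logs --tail 400 <id>. A minimal local equivalent is sketched below; the hardcoded IDs are the ones the 8642 process resolved above and would differ on any other run, and CombinedOutput is used because docker logs replays both the container's stdout and stderr streams.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the per-component gather step in the log:
    // docker logs --tail 400 <id>.
    func tailContainerLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // IDs taken from the 8642 process above (apiserver, etcd, coredns);
        // in practice they would come from the container lookup shown earlier.
        for _, id := range []string{"514180a8ec6c", "8bab61ccd5f4", "39ab3fb165fd"} {
            logs, err := tailContainerLogs(id)
            if err != nil {
                fmt.Println("docker logs", id, "failed:", err)
                continue
            }
            fmt.Printf("=== %s ===\n%s", id, logs)
        }
    }
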
	I0731 12:33:09.306485    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:14.306971    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:14.307208    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:14.324256    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:14.324341    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:14.337467    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:14.337549    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:14.348289    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:14.348362    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:14.364829    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:14.364902    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:14.375514    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:14.375587    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:14.385930    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:14.385997    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:14.396260    8653 logs.go:276] 0 containers: []
	W0731 12:33:14.396272    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:14.396336    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:14.406752    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:14.406773    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:14.406778    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:14.423977    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:14.423989    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:14.435508    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:14.435520    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:14.461644    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:14.461660    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:14.499232    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:14.499245    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:14.504202    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:14.504210    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:14.518048    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:14.518060    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:14.529809    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:14.529820    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:14.544912    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:14.544925    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:14.556836    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:14.556848    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:14.599154    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:14.599164    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:14.617335    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:14.617347    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:14.629016    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:14.629027    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:17.147294    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:17.189742    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:22.149419    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:22.149616    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:22.161848    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:22.161928    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:22.172654    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:22.172733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:22.183422    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:22.183497    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:22.194544    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:22.194615    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:22.208840    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:22.208912    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:22.220466    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:22.220547    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:22.232097    8653 logs.go:276] 0 containers: []
	W0731 12:33:22.232108    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:22.232184    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:22.243314    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:22.243331    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:22.243337    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:22.269565    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:22.269580    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:22.307480    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:22.307495    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:22.321042    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:22.321051    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:22.333906    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:22.333914    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:22.353381    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:22.353395    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:22.370522    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:22.370535    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:22.387136    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:22.387149    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:22.399636    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:22.399648    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:22.411626    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:22.411637    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:22.416639    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:22.416651    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:22.453994    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:22.454005    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:22.469297    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:22.469308    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:22.191729    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:22.191844    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:22.204413    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:22.204484    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:22.216494    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:22.216572    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:22.227953    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:33:22.228032    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:22.239274    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:22.239349    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:22.250778    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:22.250847    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:22.261795    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:22.261867    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:22.272388    8642 logs.go:276] 0 containers: []
	W0731 12:33:22.272399    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:22.272464    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:22.283662    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:22.283678    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:22.283683    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:22.295110    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:22.295120    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:22.320911    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:22.320923    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:22.333890    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:22.333901    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:22.338584    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:22.338595    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:22.376979    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:22.376991    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:22.394262    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:22.394275    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:22.409434    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:22.409451    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:22.428726    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:22.428743    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:22.441688    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:22.441701    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:22.457648    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:22.457749    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:22.475735    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:22.475751    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:22.488376    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:22.488387    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:22.515331    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:22.515342    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:22.527055    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:22.527067    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:22.527094    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:22.527099    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:22.527105    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:22.527111    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:22.527114    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
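
Note that later in this run (below, at 12:33:37) the 8642 probe starts failing with "dial tcp 10.0.2.15:8443: i/o timeout" rather than a header timeout: the TCP connection itself no longer completes, instead of connecting and then hanging. An illustrative Go sketch of telling the two failure modes apart; the classification logic is an assumption about how these wrapped errors can be inspected, not code from minikube.

    package main

    import (
        "errors"
        "fmt"
        "net"
        "net/http"
        "time"
    )

    // classify distinguishes a failed TCP dial from a connection that was
    // established but produced no response headers before the timeout.
    func classify(err error) string {
        var opErr *net.OpError
        if errors.As(err, &opErr) && opErr.Op == "dial" {
            return "dial failure: TCP connection never completed"
        }
        var netErr net.Error
        if errors.As(err, &netErr) && netErr.Timeout() {
            return "timeout: connected, but no response headers in time"
        }
        return "other error"
    }

    func main() {
        client := &http.Client{Timeout: 5 * time.Second} // assumed timeout
        if _, err := client.Get("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println(classify(err))
        }
    }
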
	I0731 12:33:24.988747    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:29.990772    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:29.990913    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:30.007812    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:30.007908    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:30.021492    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:30.021564    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:30.037832    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:30.037917    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:30.049228    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:30.049309    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:30.060274    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:30.060340    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:30.071121    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:30.071189    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:30.081981    8653 logs.go:276] 0 containers: []
	W0731 12:33:30.081995    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:30.082067    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:30.093109    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:30.093123    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:30.093129    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:30.097775    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:30.097784    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:30.112708    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:30.112721    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:30.133394    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:30.133408    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:30.148587    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:30.148597    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:30.160705    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:30.160717    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:30.172671    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:30.172683    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:30.208288    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:30.208299    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:30.227056    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:30.227067    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:30.239300    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:30.239313    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:30.254094    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:30.254105    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:30.272483    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:30.272495    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:30.297197    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:30.297208    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:32.837010    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:32.530910    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:37.837251    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:37.837427    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:37.849652    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:37.849733    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:37.860879    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:37.860956    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:37.874136    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:37.874208    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:37.886696    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:37.886770    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:37.897329    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:37.897413    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:37.907982    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:37.908052    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:37.918012    8653 logs.go:276] 0 containers: []
	W0731 12:33:37.918029    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:37.918097    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:37.929255    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:37.929269    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:37.929275    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:37.934287    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:37.934295    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:37.973388    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:37.973399    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:37.988446    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:37.988460    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:38.003260    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:38.003274    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:38.015469    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:38.015481    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:38.039101    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:38.039114    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:38.050793    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:38.050809    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:38.084572    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:38.084589    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:38.096515    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:38.096526    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:38.108065    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:38.108077    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:38.125454    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:38.125464    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:38.137593    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:38.137603    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:37.531241    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:37.531454    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:37.549529    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:37.549620    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:37.562535    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:37.562617    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:37.573405    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:33:37.573475    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:37.584033    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:37.584105    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:37.597969    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:37.598050    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:37.608145    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:37.608210    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:37.618043    8642 logs.go:276] 0 containers: []
	W0731 12:33:37.618052    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:37.618108    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:37.629660    8642 logs.go:276] 1 containers: [88c293218170]
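	[Editor's note] The eight docker ps probes above are the discovery pass minikube runs before each log sweep: one filtered listing per control-plane component, returning container IDs only. A minimal Go sketch of that pass, assuming direct local docker access rather than minikube's SSH runner; the --filter and --format arguments are copied from the log, while the component list and everything else is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names as they appear in the k8s_<name> container
		// naming convention probed in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
			"storage-provisioner",
		}
		for _, c := range components {
			// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c,
				"--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			// The log prints e.g. "1 containers: [88c293218170]".
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}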
	I0731 12:33:37.629674    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:37.629679    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:37.641779    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:37.641791    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:37.657191    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:37.657203    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:37.669067    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:37.669079    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:37.686099    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:37.686108    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:37.699336    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:37.699429    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:37.716921    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:37.716931    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:37.721312    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:37.721320    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:37.755033    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:37.755043    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:37.769361    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:37.769370    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:37.780956    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:37.780966    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:37.801043    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:37.801052    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:37.812961    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:37.812973    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:37.837149    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:37.837162    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:37.853716    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:37.853727    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:37.853754    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:37.853760    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:37.853764    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:37.853783    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:37.853793    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
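	[Editor's note] Throughout this stretch both processes (pids 8642 and 8653) alternate between probing the apiserver's /healthz endpoint and, when the probe fails, running another diagnostic sweep. The roughly five-second gap between each "Checking" and "stopped" pair suggests a five-second client timeout; the two failure modes recorded are "dial tcp ... i/o timeout" and "context deadline exceeded". A minimal Go sketch of such a probe loop, assuming a plain HTTP client with TLS verification disabled — only the URL and the timeout are taken from the log; the retry cadence and TLS handling are guesses, not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Matches the ~5s gap between "Checking" and "stopped" entries.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// A bare client won't trust the apiserver's serving cert;
				// skipping verification keeps the sketch self-contained.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz"
		for {
			fmt.Println("Checking apiserver healthz at", url, "...")
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "context deadline exceeded (Client.Timeout
				// exceeded while awaiting headers)"
				fmt.Println("stopped:", err)
				time.Sleep(3 * time.Second) // gather diagnostics, then retry
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver is healthy")
				return
			}
			time.Sleep(3 * time.Second)
		}
	}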
	I0731 12:33:40.656323    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:45.658465    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:45.658630    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:45.675330    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:45.675436    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:45.693626    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:45.693702    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:45.708579    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:45.708654    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:45.719764    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:45.719839    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:45.729947    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:45.730025    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:45.740836    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:45.740909    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:45.751407    8653 logs.go:276] 0 containers: []
	W0731 12:33:45.751420    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:45.751479    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:45.761899    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:45.761915    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:45.761921    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:45.779945    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:45.779959    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:45.791967    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:45.791978    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:45.796269    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:45.796276    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:45.830208    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:45.830218    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:45.845199    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:45.845212    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:45.857332    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:45.857341    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:45.872322    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:45.872337    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:45.884382    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:45.884392    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:45.920223    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:45.920238    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:45.933787    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:45.933798    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:45.945427    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:45.945440    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:45.956856    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:45.956866    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:48.482072    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:47.856584    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:53.484183    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:53.484352    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:53.501122    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:33:53.501207    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:53.514043    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:33:53.514123    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:53.526116    8653 logs.go:276] 2 containers: [8c40f8041491 7a912ac1efd8]
	I0731 12:33:53.526195    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:53.538075    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:33:53.538145    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:53.548695    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:33:53.548774    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:53.559174    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:33:53.559248    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:53.569510    8653 logs.go:276] 0 containers: []
	W0731 12:33:53.569523    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:53.569589    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:53.580228    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:33:53.580243    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:53.580249    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:33:53.615234    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:33:53.615242    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:33:53.629687    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:33:53.629697    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:33:53.644277    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:33:53.644292    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:33:53.655403    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:53.655415    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:53.678404    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:53.678416    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:53.683089    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:53.683095    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:53.719168    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:33:53.719180    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:33:53.734245    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:33:53.734254    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:33:53.746533    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:33:53.746543    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:33:53.758312    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:33:53.758322    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:33:53.775555    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:33:53.775566    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:33:53.786652    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:33:53.786663    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
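	[Editor's note] Each sweep then pulls the last 400 lines from every source it discovered: docker logs --tail 400 per container, journalctl -n 400 for the kubelet and Docker/cri-docker units, a filtered dmesg, kubectl describe nodes, and — for overall container status — the fallback chain sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. That last command is worth unpacking: if crictl is not installed, which fails, the echo substitutes the bare word crictl, that command also fails, and the outer || falls through to docker ps -a. A compact Go sketch of the collection loop, again assuming local shell access instead of the SSH runner; the container IDs are hypothetical placeholders:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command through bash, the same way the log's
	// ssh_runner entries do (/bin/bash -c "...").
	func run(cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("## %s (err=%v)\n%s\n", cmd, err, out)
	}

	func main() {
		// Container name -> ID, as produced by the discovery pass above.
		containers := map[string]string{
			"kube-apiserver": "ee0d60a27422", // placeholder IDs
			"etcd":           "1b732205e616",
		}
		for name, id := range containers {
			fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
			run(fmt.Sprintf("docker logs --tail 400 %s", id))
		}
		run("sudo journalctl -u kubelet -n 400")
		run("sudo journalctl -u docker -u cri-docker -n 400")
		run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig")
		// crictl-or-docker fallback, verbatim from the log:
		run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}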
	I0731 12:33:52.858757    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:52.858882    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:52.870357    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:52.870430    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:52.881743    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:52.881809    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:52.892911    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:33:52.892995    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:52.904119    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:52.904198    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:52.915194    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:52.915270    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:52.930430    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:52.930505    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:52.943557    8642 logs.go:276] 0 containers: []
	W0731 12:33:52.943568    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:52.943630    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:52.953735    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:52.953752    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:52.953758    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:52.988803    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:52.988814    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:53.005383    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:53.005393    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:53.025789    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:53.025800    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:53.037604    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:53.037615    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:53.049536    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:53.049551    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:53.061855    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:53.061866    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:53.086746    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:53.086753    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:53.101330    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:53.101423    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:53.118458    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:53.118463    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:53.132324    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:33:53.132337    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:33:53.143608    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:53.143622    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:53.161246    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:53.161256    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:53.165585    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:53.165591    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:53.179502    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:33:53.179514    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:33:53.191029    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:53.191041    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:53.202713    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:53.202724    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:53.202751    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:53.202755    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:53.202759    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:53.202764    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:53.202766    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
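	[Editor's note] The "Found kubelet problem" warnings and the "X Problems detected in kubelet" block above come from scanning the kubelet journal for lines matching known problem patterns, then replaying the hits as a summary at the end of the sweep. The two flagged lines are a single RBAC denial seen through both a list and a watch: the kubelet, acting as system:node:stopped-upgrade-155000, is refused the kube-proxy ConfigMap, and the "no relationship found between node ... and this object" wording is consistent with Kubernetes' Node authorizer, which only grants a node access to objects referenced by pods scheduled to it. A sketch of the scan-and-summarize step; the pattern list is a hypothetical stand-in, not minikube's actual set:

	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Substrings that mark a kubelet line as a "problem";
		// hypothetical patterns for illustration only.
		patterns := []string{"Failed to watch", "failed to list", "forbidden"}

		out, err := exec.Command("/bin/bash", "-c",
			"sudo journalctl -u kubelet -n 400").Output()
		if err != nil {
			fmt.Println("journalctl:", err)
			return
		}

		var problems []string
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			line := sc.Text()
			for _, p := range patterns {
				if strings.Contains(line, p) {
					fmt.Println("Found kubelet problem:", line)
					problems = append(problems, line)
					break
				}
			}
		}
		if len(problems) > 0 {
			fmt.Println("X Problems detected in kubelet:")
			for _, p := range problems {
				fmt.Println("  " + p)
			}
		}
	}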
	I0731 12:33:56.300663    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:01.302779    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:01.302893    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:01.315465    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:01.315541    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:01.326335    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:01.326405    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:01.337331    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:01.337405    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:01.348054    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:01.348124    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:01.358581    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:01.358646    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:01.369388    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:01.369453    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:01.379547    8653 logs.go:276] 0 containers: []
	W0731 12:34:01.379558    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:01.379620    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:01.389999    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:01.390018    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:01.390023    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:01.394915    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:01.394923    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:01.409131    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:01.409140    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:01.420723    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:01.420734    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:01.456163    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:01.456172    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:01.467682    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:01.467694    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:01.485436    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:01.485447    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:01.522823    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:01.522840    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:01.537629    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:01.537642    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:01.549701    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:01.549712    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:01.568637    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:01.568649    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:01.580907    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:01.580918    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:01.595302    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:01.595315    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:01.614455    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:01.614468    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:01.625727    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:01.625741    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:03.206636    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:04.153366    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:08.208827    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:08.208996    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:08.221773    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:08.221857    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:08.234355    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:08.234432    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:08.244508    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:08.244579    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:08.254664    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:08.254742    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:08.264769    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:08.264831    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:08.275654    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:08.275735    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:08.286105    8642 logs.go:276] 0 containers: []
	W0731 12:34:08.286117    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:08.286178    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:08.297207    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:08.297226    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:08.297231    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:08.309070    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:08.309080    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:08.325367    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:08.325384    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:08.336964    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:08.336977    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:08.351122    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:08.351137    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:08.362674    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:08.362688    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:08.377574    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:08.377672    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:08.394764    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:08.394769    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:08.429458    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:08.429472    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:08.447017    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:08.447035    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:08.458528    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:08.458539    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:08.483201    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:08.483215    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:08.487392    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:08.487399    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:08.501129    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:08.501140    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:08.514397    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:08.514408    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:08.526248    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:08.526259    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:08.546113    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:08.546122    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:08.546149    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:08.546154    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:08.546158    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:08.546167    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:08.546170    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
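	[Editor's note] The recurring out.go:338 line ("TERM=,COLORTERM=, which probably does not support color") records a terminal-capability check made before the problem summary is printed: with both environment variables empty, colored output is disabled. A toy version of that check — the exact rules minikube applies are an assumption here:

	package main

	import (
		"fmt"
		"os"
	)

	// wantsColor approximates the kind of check behind the log line:
	// with TERM and COLORTERM both empty, assume no color support.
	func wantsColor() bool {
		term := os.Getenv("TERM")
		colorterm := os.Getenv("COLORTERM")
		fmt.Printf("TERM=%s,COLORTERM=%s\n", term, colorterm)
		if term == "" && colorterm == "" {
			return false
		}
		if term == "dumb" {
			return false
		}
		return true
	}

	func main() {
		if wantsColor() {
			fmt.Println("color output enabled")
		} else {
			fmt.Println("which probably does not support color")
		}
	}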
	I0731 12:34:09.154554    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:09.154700    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:09.166860    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:09.166942    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:09.177759    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:09.177832    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:09.188698    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:09.188779    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:09.200272    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:09.200339    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:09.211060    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:09.211140    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:09.222102    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:09.222174    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:09.232887    8653 logs.go:276] 0 containers: []
	W0731 12:34:09.232898    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:09.232963    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:09.243495    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:09.243513    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:09.243519    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:09.260598    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:09.260611    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:09.272150    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:09.272163    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:09.296801    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:09.296813    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:09.331686    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:09.331695    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:09.369645    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:09.369656    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:09.381417    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:09.381427    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:09.386233    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:09.386239    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:09.401372    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:09.401383    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:09.413263    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:09.413273    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:09.425616    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:09.425626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:09.437342    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:09.437356    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:09.448854    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:09.448868    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:09.460648    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:09.460659    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:09.485337    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:09.485348    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:12.004079    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:17.006308    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:17.006492    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:17.021045    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:17.021127    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:17.032315    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:17.032389    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:17.047924    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:17.048004    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:17.058720    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:17.058793    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:17.069371    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:17.069460    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:17.080347    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:17.080419    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:17.091619    8653 logs.go:276] 0 containers: []
	W0731 12:34:17.091630    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:17.091694    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:17.107022    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:17.107040    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:17.107045    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:17.118849    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:17.118860    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:17.130886    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:17.130897    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:17.164533    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:17.164544    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:17.179020    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:17.179030    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:17.190144    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:17.190155    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:17.207512    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:17.207522    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:17.242676    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:17.242684    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:17.257651    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:17.257665    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:17.269385    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:17.269398    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:17.280726    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:17.280735    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:17.294746    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:17.294759    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:17.306230    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:17.306244    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:17.331575    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:17.331584    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:17.336085    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:17.336093    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:18.549942    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:19.849691    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:23.552034    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:23.552231    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:23.568759    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:23.568848    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:23.586014    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:23.586073    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:23.604123    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:23.604191    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:23.615341    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:23.615408    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:23.626579    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:23.626644    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:23.643004    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:23.643076    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:23.653798    8642 logs.go:276] 0 containers: []
	W0731 12:34:23.653809    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:23.653866    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:23.664326    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:23.664343    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:23.664349    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:23.701393    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:23.701406    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:23.713339    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:23.713351    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:23.725216    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:23.725227    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:23.729547    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:23.729553    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:23.741490    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:23.741501    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:23.753637    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:23.753647    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:23.778815    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:23.778823    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:23.791010    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:23.791023    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:23.802467    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:23.802477    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:23.817197    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:23.817208    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:23.833233    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:23.833245    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:23.846659    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:23.846753    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:23.864261    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:23.864270    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:23.876663    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:23.876673    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:23.894783    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:23.894792    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:23.910349    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:23.910358    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:23.910383    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:23.910387    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:23.910390    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:23.910394    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:23.910399    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:34:24.851736    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:24.851837    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:24.862579    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:24.862658    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:24.873861    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:24.873941    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:24.884494    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:24.884570    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:24.894670    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:24.894744    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:24.905377    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:24.905451    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:24.916064    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:24.916132    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:24.926365    8653 logs.go:276] 0 containers: []
	W0731 12:34:24.926380    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:24.926442    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:24.936807    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:24.936827    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:24.936832    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:24.972528    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:24.972539    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:24.986737    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:24.986751    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:24.998284    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:24.998294    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:25.012260    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:25.012270    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:25.023756    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:25.023771    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:25.048386    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:25.048394    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:25.059911    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:25.059920    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:25.072447    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:25.072464    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:25.084380    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:25.084394    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:25.099406    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:25.099419    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:25.117860    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:25.117871    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:25.130143    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:25.130157    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:25.134482    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:25.134492    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:25.174579    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:25.174590    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:27.686896    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:32.689023    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:32.689147    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:32.700423    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:32.700495    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:32.713266    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:32.713337    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:32.724418    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:32.724494    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:32.735278    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:32.735344    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:32.746073    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:32.746147    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:32.756690    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:32.756754    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:32.766728    8653 logs.go:276] 0 containers: []
	W0731 12:34:32.766739    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:32.766796    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:32.777342    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:32.777360    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:32.777366    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:32.813222    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:32.813239    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:32.824939    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:32.824953    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:32.858934    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:32.858942    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:32.870441    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:32.870453    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:32.886679    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:32.886690    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:32.911292    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:32.911301    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:32.922406    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:32.922417    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:32.934450    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:32.934461    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:32.946334    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:32.946345    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:32.960749    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:32.960759    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:32.979027    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:32.979037    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:32.990343    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:32.990354    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:32.995006    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:32.995013    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:33.010001    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:33.010012    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
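
The repeated "Checking apiserver healthz at https://10.0.2.15:8443/healthz ..." / "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" pairs in this run come from a health poll that gives up after roughly five seconds per attempt (the gap between each pair of timestamps). A minimal Go sketch of such a loop, assuming a self-signed apiserver certificate and a hypothetical two-second retry delay; this is an illustration, not minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s "Checking" -> "stopped" gap above
		Transport: &http.Transport{
			// assumption: skip verification of the apiserver's self-signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. context deadline exceeded
			time.Sleep(2 * time.Second)  // hypothetical back-off before retrying
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
}

In this run the probe never returns within the timeout, so each failure triggers another full log-gathering cycle like the one above.
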
	I0731 12:34:33.914168    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:35.530466    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:38.916297    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:38.916513    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:38.951503    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:38.951589    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:38.966064    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:38.966148    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:38.978197    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:38.978272    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:38.989029    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:38.989093    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:39.001817    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:39.001886    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:39.012472    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:39.012534    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:39.023049    8642 logs.go:276] 0 containers: []
	W0731 12:34:39.023060    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:39.023118    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:39.033335    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:39.033352    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:39.033357    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:39.037749    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:39.037757    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:39.049407    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:39.049416    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:39.061643    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:39.061654    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:39.080548    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:39.080558    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:39.092724    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:39.092736    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:39.108009    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:39.108103    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:39.125662    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:39.125671    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:39.140146    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:39.140156    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:39.152681    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:39.152692    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:39.187337    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:39.187349    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:39.203339    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:39.203348    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:39.220841    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:39.220852    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:39.234272    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:39.234281    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:39.245929    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:39.245939    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:39.259416    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:39.259426    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:39.283258    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:39.283266    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:39.283288    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:39.283292    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:39.283296    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:39.283300    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:39.283311    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
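
The "Found kubelet problem" warnings and the "X Problems detected in kubelet:" summary above come from scanning the journalctl dump for reflector list/watch failures; here the flagged error is an RBAC denial (node stopped-upgrade-155000 is not allowed to read the kube-proxy ConfigMap). A rough Go sketch of that scan, where scanKubeletJournal and its substring filters are illustrative assumptions, not the real logs.go:138 logic:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// scanKubeletJournal picks out reflector list/watch failures from a
// "journalctl -u kubelet" dump (hypothetical filter, for illustration).
func scanKubeletJournal(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "failed to list") || strings.Contains(line, "Failed to watch") {
			problems = append(problems, line)
		}
	}
	return problems
}

func main() {
	// stand-in for the real journalctl output captured above
	journal := `Jul 31 19:28:17 ... reflector.go:324] ... failed to list *v1.ConfigMap ...`
	if problems := scanKubeletJournal(journal); len(problems) > 0 {
		fmt.Println("X Problems detected in kubelet:")
		for _, p := range problems {
			fmt.Println("  " + p)
		}
	}
}
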
	I0731 12:34:40.532580    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:40.532741    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:40.548900    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:40.548978    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:40.561909    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:40.561983    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:40.573017    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:40.573097    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:40.583205    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:40.583271    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:40.593988    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:40.594054    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:40.604495    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:40.604563    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:40.614527    8653 logs.go:276] 0 containers: []
	W0731 12:34:40.614537    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:40.614588    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:40.625281    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:40.625301    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:40.625305    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:40.641181    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:40.641192    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:40.656192    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:40.656204    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:40.679960    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:40.679973    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:40.684805    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:40.684813    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:40.696303    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:40.696316    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:40.710698    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:40.710709    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:40.728349    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:40.728358    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:40.775695    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:40.775707    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:40.789473    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:40.789487    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:40.803597    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:40.803607    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:40.815686    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:40.815696    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:40.851125    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:40.851135    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:40.865344    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:40.865358    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:40.880671    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:40.880682    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
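
Each cycle above starts by enumerating the per-component containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and reporting "N containers: [...]" (the logs.go:276 lines), including the empty result for "kindnet". A self-contained Go sketch of that discovery step, with listContainers and the component list as illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or not)
// whose name matches k8s_<component>, mirroring the filter used above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one short ID per output line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
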
	I0731 12:34:43.395211    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:48.397332    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:48.397494    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:48.410312    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:48.410395    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:48.421460    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:48.421531    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:48.432236    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:48.432315    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:48.442574    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:48.442638    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:48.453120    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:48.453196    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:48.463560    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:48.463633    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:48.474475    8653 logs.go:276] 0 containers: []
	W0731 12:34:48.474487    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:48.474545    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:48.485089    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:48.485108    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:48.485114    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:48.497033    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:48.497045    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:48.508334    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:48.508344    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:48.520412    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:48.520425    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:48.546053    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:48.546069    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:48.560323    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:48.560334    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:48.574386    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:48.574397    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:48.593354    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:48.593365    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:48.597892    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:48.597900    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:48.632331    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:48.632341    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:48.645538    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:48.645548    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:48.658558    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:48.658569    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:34:48.693953    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:48.693966    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:48.706676    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:48.706687    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:48.724071    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:48.724082    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:49.285464    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:51.240835    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:54.287527    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:54.287614    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:54.299205    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:54.299286    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:54.310364    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:54.310439    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:54.320907    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:54.320976    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:54.331530    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:54.331601    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:54.342392    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:54.342459    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:54.353448    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:54.353519    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:54.364246    8642 logs.go:276] 0 containers: []
	W0731 12:34:54.364258    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:54.364314    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:54.374517    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:54.374534    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:54.374539    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:54.411176    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:54.411187    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:54.425701    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:54.425710    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:54.442035    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:54.442046    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:54.453754    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:54.453764    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:54.465068    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:54.465077    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:54.478351    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:54.478447    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:54.495868    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:54.495878    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:54.511101    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:54.511115    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:54.529022    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:54.529034    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:54.533816    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:54.533825    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:54.546071    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:54.546086    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:54.570591    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:54.570603    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:54.584366    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:54.584376    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:54.600267    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:54.600277    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:54.612349    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:54.612360    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:54.624070    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:54.624081    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:54.624106    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:54.624112    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:54.624116    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:54.624120    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:54.624124    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:34:56.242845    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:56.242933    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:56.254848    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:34:56.254915    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:56.265414    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:34:56.265489    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:56.276457    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:34:56.276530    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:56.286629    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:34:56.286695    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:56.297369    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:34:56.297430    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:56.308164    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:34:56.308227    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:56.318730    8653 logs.go:276] 0 containers: []
	W0731 12:34:56.318740    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:56.318797    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:56.329397    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:34:56.329415    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:56.329421    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:56.365606    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:34:56.365616    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:34:56.380371    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:34:56.380384    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:34:56.392076    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:34:56.392089    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:34:56.404178    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:34:56.404189    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:34:56.415997    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:56.416011    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:56.439658    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:56.439665    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:56.443816    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:34:56.443825    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:34:56.458054    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:34:56.458065    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:34:56.469902    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:34:56.469912    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:34:56.481601    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:34:56.481615    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:34:56.492969    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:34:56.492980    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:34:56.507822    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:34:56.507834    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:34:56.526961    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:34:56.526972    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:56.539165    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:56.539180    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
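
With the IDs in hand, each "Gathering logs for <component> [<id>] ..." step shells out to docker logs --tail 400 <id> (journalctl is used instead for kubelet and Docker, and kubectl describe nodes for the node view). A short Go sketch of that per-container step; gatherComponent is a hypothetical helper and the IDs are copied from this particular run:

package main

import (
	"fmt"
	"os/exec"
)

// gatherComponent runs the same shell command the report shows and
// returns the last 400 lines the container printed.
func gatherComponent(name, id string) (string, error) {
	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
	out, err := exec.Command("/bin/bash", "-c",
		"docker logs --tail 400 "+id).CombinedOutput()
	return string(out), err
}

func main() {
	// IDs taken from the run above; on another host they will differ.
	targets := map[string]string{
		"kube-apiserver": "ee0d60a27422",
		"etcd":           "1b732205e616",
	}
	for name, id := range targets {
		if _, err := gatherComponent(name, id); err != nil {
			fmt.Println("gather failed:", err)
		}
	}
}
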
	I0731 12:34:59.074621    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:04.627906    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:04.076405    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:04.076622    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:04.092224    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:04.092312    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:04.104409    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:04.104490    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:04.115212    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:04.115282    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:04.126072    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:04.126136    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:04.136891    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:04.136955    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:04.147431    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:04.147492    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:04.158191    8653 logs.go:276] 0 containers: []
	W0731 12:35:04.158205    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:04.158273    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:04.169269    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:04.169287    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:04.169293    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:04.183306    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:04.183317    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:04.194773    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:04.194783    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:04.213236    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:04.213245    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:04.236367    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:04.236377    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:04.261855    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:04.261866    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:04.273623    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:04.273636    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:04.309491    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:04.309504    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:04.323993    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:04.324006    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:04.335294    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:04.335304    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:04.346711    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:04.346724    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:04.358242    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:04.358255    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:04.392589    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:04.392599    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:04.407821    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:04.407831    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:04.412500    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:04.412507    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:06.925804    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:09.629922    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:09.630080    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:09.642563    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:35:09.642641    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:09.658523    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:35:09.658599    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:09.668884    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:35:09.668963    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:09.685617    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:35:09.685686    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:09.695905    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:35:09.695976    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:09.710036    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:35:09.710105    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:09.720345    8642 logs.go:276] 0 containers: []
	W0731 12:35:09.720355    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:09.720407    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:09.731118    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:35:09.731137    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:09.731141    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:09.756719    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:09.756728    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:09.760996    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:35:09.761005    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:35:09.775187    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:35:09.775199    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:35:09.790804    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:35:09.790815    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:35:09.807715    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:09.807728    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:35:09.822669    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:09.822764    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:09.840531    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:35:09.840542    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:35:11.926775    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:11.926921    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:11.938259    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:11.938341    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:11.949075    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:11.949148    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:11.959947    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:11.960022    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:11.970487    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:11.970557    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:11.980782    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:11.980856    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:11.996241    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:11.996315    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:12.006364    8653 logs.go:276] 0 containers: []
	W0731 12:35:12.006376    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:12.006435    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:12.017166    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:12.017184    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:12.017189    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:12.040573    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:12.040580    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:12.054161    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:12.054171    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:12.065768    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:12.065781    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:12.083105    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:12.083115    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:12.094918    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:12.094930    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:12.108002    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:12.108014    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:12.120516    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:12.120525    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:12.125523    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:12.125529    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:12.137192    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:12.137203    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:12.152420    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:12.152434    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:12.187898    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:12.187907    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:12.199548    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:12.199557    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:12.242549    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:12.242562    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:12.261336    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:12.261348    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:09.852850    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:09.852860    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:09.888957    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:35:09.888968    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:35:09.907048    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:35:09.907060    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:35:09.922592    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:35:09.922607    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:35:09.934202    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:35:09.934212    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:35:09.948681    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:35:09.948692    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:35:09.968773    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:35:09.968783    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:35:09.979803    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:35:09.979813    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:09.991771    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:09.991780    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:35:09.991806    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:35:09.991812    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:09.991816    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:09.991819    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:09.991822    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:35:14.775060    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:19.777179    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:19.777414    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:19.794914    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:19.795007    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:19.808191    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:19.808274    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:19.823179    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:19.823252    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:19.833111    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:19.833186    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:19.843676    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:19.843745    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:19.854830    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:19.854897    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:19.865479    8653 logs.go:276] 0 containers: []
	W0731 12:35:19.865491    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:19.865552    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:19.876300    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:19.876317    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:19.876322    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:19.888611    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:19.888624    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:19.925433    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:19.925455    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:19.940824    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:19.940835    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:19.976512    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:19.976524    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:19.991056    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:19.991068    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:20.005982    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:20.005994    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:20.017615    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:20.017626    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:20.029807    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:20.029823    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:20.046109    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:20.046120    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:20.057445    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:20.057457    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:20.074765    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:20.074775    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:20.079124    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:20.079133    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:20.101306    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:20.101316    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:20.113380    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:20.113392    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:22.627718    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:19.995592    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:27.629861    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:27.630024    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:27.643102    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:27.643188    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:27.656430    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:27.656498    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:27.671016    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:27.671097    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:27.681293    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:27.681363    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:27.691943    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:27.692010    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:27.705103    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:27.705178    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:27.715576    8653 logs.go:276] 0 containers: []
	W0731 12:35:27.715588    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:27.715649    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:27.725882    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:27.725900    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:27.725905    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:27.730496    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:27.730503    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:27.741963    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:27.741975    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:27.753688    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:27.753702    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:27.771315    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:27.771328    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:27.783109    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:27.783121    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:27.818160    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:27.818170    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:27.856132    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:27.856144    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:27.870543    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:27.870554    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:27.881766    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:27.881779    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:27.894266    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:27.894279    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:27.906527    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:27.906537    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:27.926188    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:27.926201    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:27.942053    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:27.942064    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:27.954001    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:27.954015    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:24.997623    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:24.997805    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:25.011800    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:35:25.011887    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:25.023207    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:35:25.023284    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:25.035598    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:35:25.035669    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:25.046374    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:35:25.046449    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:25.056932    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:35:25.057004    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:25.067319    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:35:25.067384    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:25.077991    8642 logs.go:276] 0 containers: []
	W0731 12:35:25.078001    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:25.078063    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:25.088276    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:35:25.088294    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:25.088300    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:35:25.103362    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:25.103458    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:25.120995    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:25.121002    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:25.125635    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:35:25.125646    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:35:25.157618    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:35:25.157633    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:35:25.180271    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:35:25.180282    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:35:25.193036    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:35:25.193047    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:35:25.204732    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:35:25.204744    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:35:25.219103    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:35:25.219114    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:35:25.236783    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:35:25.236792    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:35:25.248355    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:25.248366    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:25.272725    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:35:25.272737    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:25.284239    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:25.284251    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:25.320106    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:35:25.320117    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:35:25.334332    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:35:25.334343    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:35:25.348454    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:35:25.348465    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:35:25.363862    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:25.363877    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:35:25.363907    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:35:25.363935    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:25.363942    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:25.363947    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:25.363950    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:35:30.479977    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:35.482111    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:35.482211    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:35.494985    8653 logs.go:276] 1 containers: [ee0d60a27422]
	I0731 12:35:35.495068    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:35.506331    8653 logs.go:276] 1 containers: [1b732205e616]
	I0731 12:35:35.506423    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:35.517826    8653 logs.go:276] 4 containers: [c68d82c5f847 720f5dd355b1 8c40f8041491 7a912ac1efd8]
	I0731 12:35:35.517919    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:35.529317    8653 logs.go:276] 1 containers: [6f6c25f5dc4c]
	I0731 12:35:35.529391    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:35.540022    8653 logs.go:276] 1 containers: [acab21097160]
	I0731 12:35:35.540088    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:35.550521    8653 logs.go:276] 1 containers: [071d0304665b]
	I0731 12:35:35.550588    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:35.560486    8653 logs.go:276] 0 containers: []
	W0731 12:35:35.560499    8653 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:35.560571    8653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:35.575014    8653 logs.go:276] 1 containers: [94a71bce1887]
	I0731 12:35:35.575030    8653 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:35.575036    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:35.579441    8653 logs.go:123] Gathering logs for kube-proxy [acab21097160] ...
	I0731 12:35:35.579447    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acab21097160"
	I0731 12:35:35.590521    8653 logs.go:123] Gathering logs for storage-provisioner [94a71bce1887] ...
	I0731 12:35:35.590530    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94a71bce1887"
	I0731 12:35:35.602165    8653 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:35.602176    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:35.626290    8653 logs.go:123] Gathering logs for kube-apiserver [ee0d60a27422] ...
	I0731 12:35:35.626302    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee0d60a27422"
	I0731 12:35:35.640650    8653 logs.go:123] Gathering logs for coredns [720f5dd355b1] ...
	I0731 12:35:35.640661    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 720f5dd355b1"
	I0731 12:35:35.655047    8653 logs.go:123] Gathering logs for coredns [8c40f8041491] ...
	I0731 12:35:35.655059    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c40f8041491"
	I0731 12:35:35.666438    8653 logs.go:123] Gathering logs for kube-scheduler [6f6c25f5dc4c] ...
	I0731 12:35:35.666448    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f6c25f5dc4c"
	I0731 12:35:35.680677    8653 logs.go:123] Gathering logs for container status ...
	I0731 12:35:35.680690    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:35.694045    8653 logs.go:123] Gathering logs for coredns [7a912ac1efd8] ...
	I0731 12:35:35.694059    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a912ac1efd8"
	I0731 12:35:35.706288    8653 logs.go:123] Gathering logs for kube-controller-manager [071d0304665b] ...
	I0731 12:35:35.706302    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 071d0304665b"
	I0731 12:35:35.724544    8653 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:35.724556    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:35:35.758428    8653 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:35.758436    8653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:35.794384    8653 logs.go:123] Gathering logs for etcd [1b732205e616] ...
	I0731 12:35:35.794397    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b732205e616"
	I0731 12:35:35.808492    8653 logs.go:123] Gathering logs for coredns [c68d82c5f847] ...
	I0731 12:35:35.808503    8653 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c68d82c5f847"
	I0731 12:35:38.322522    8653 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:35.367030    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:40.369191    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:40.373004    8642 out.go:177] 
	W0731 12:35:40.376053    8642 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:35:40.376063    8642 out.go:239] * 
	W0731 12:35:40.376887    8642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:35:40.389055    8642 out.go:177] 
	I0731 12:35:43.324625    8653 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:43.330189    8653 out.go:177] 
	W0731 12:35:43.334182    8653 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:35:43.334192    8653 out.go:239] * 
	W0731 12:35:43.334940    8653 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:35:43.348451    8653 out.go:177] 
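	
	Both GUEST_START failures above come from the same loop: api_server.go probes https://10.0.2.15:8443/healthz with a roughly 5s client timeout and retries until the 6m0s node-start deadline; the interleaved pids (8642 and 8653) are parallel test runs, each talking to the slirp-assigned 10.0.2.15 inside its own guest. A minimal sketch of one probe by hand from inside a guest (-k skips TLS verification; the --cacert path is an assumption based on minikube's usual /var/lib/minikube/certs layout):
	
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz
	  # or, validating against the cluster CA (path assumed):
	  curl --cacert /var/lib/minikube/certs/ca.crt --max-time 5 https://10.0.2.15:8443/healthz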
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-31 19:26:26 UTC, ends at Wed 2024-07-31 19:35:59 UTC. --
	Jul 31 19:35:44 running-upgrade-992000 dockerd[4227]: time="2024-07-31T19:35:44.151049265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 19:35:44 running-upgrade-992000 dockerd[4227]: time="2024-07-31T19:35:44.151061306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 19:35:44 running-upgrade-992000 dockerd[4227]: time="2024-07-31T19:35:44.151131180Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9e0007c7d58b898a5dd6a18db45aca60efdbd6e219af52cfb0ac4c350bfd4100 pid=20374 runtime=io.containerd.runc.v2
	Jul 31 19:35:44 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:44Z" level=error msg="ContainerStats resp: {0x40007ef3c0 linux}"
	Jul 31 19:35:44 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:44Z" level=error msg="ContainerStats resp: {0x40007c3780 linux}"
	Jul 31 19:35:45 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:45Z" level=error msg="ContainerStats resp: {0x4000759880 linux}"
	Jul 31 19:35:45 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:35:46 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:46Z" level=error msg="ContainerStats resp: {0x400028b580 linux}"
	Jul 31 19:35:46 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:46Z" level=error msg="ContainerStats resp: {0x400028ba00 linux}"
	Jul 31 19:35:46 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:46Z" level=error msg="ContainerStats resp: {0x4000a1f3c0 linux}"
	Jul 31 19:35:46 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:46Z" level=error msg="ContainerStats resp: {0x4000a1f840 linux}"
	Jul 31 19:35:46 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:46Z" level=error msg="ContainerStats resp: {0x40007c2480 linux}"
	Jul 31 19:35:46 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:46Z" level=error msg="ContainerStats resp: {0x40007ee4c0 linux}"
	Jul 31 19:35:50 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:50Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:35:55 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:55Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:35:56 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:56Z" level=error msg="ContainerStats resp: {0x40007588c0 linux}"
	Jul 31 19:35:56 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:56Z" level=error msg="ContainerStats resp: {0x4000758a40 linux}"
	Jul 31 19:35:57 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:57Z" level=error msg="ContainerStats resp: {0x400028b480 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40007c2500 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40008ff800 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40008ffc00 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40007c3100 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40007ee4c0 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40007ee880 linux}"
	Jul 31 19:35:58 running-upgrade-992000 cri-dockerd[3948]: time="2024-07-31T19:35:58Z" level=error msg="ContainerStats resp: {0x40004e6200 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	cc9e935c2be81       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   cf6337af6c2a5
	9e0007c7d58b8       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   a15dfb5912b65
	c68d82c5f847d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   a15dfb5912b65
	720f5dd355b1f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   cf6337af6c2a5
	acab21097160a       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   e30c1469063ea
	94a71bce18875       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   a04bc0e94d905
	071d0304665b0       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   5446f16a43481
	ee0d60a27422b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   ab8d5594ace9d
	1b732205e6160       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   6d480bafe6e9e
	6f6c25f5dc4c0       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   0e4023816e366
	
	
	==> coredns [720f5dd355b1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:40244->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:60324->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:60213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:60339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:60569->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:59554->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:53189->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:39673->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:46067->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8923819972672746036.6551399036177878727. HINFO: read udp 10.244.0.2:54787->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9e0007c7d58b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3738188441365191387.1151731556377526276. HINFO: read udp 10.244.0.3:46308->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3738188441365191387.1151731556377526276. HINFO: read udp 10.244.0.3:40614->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3738188441365191387.1151731556377526276. HINFO: read udp 10.244.0.3:49734->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c68d82c5f847] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:53334->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:58580->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:59143->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:49433->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:57542->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:45246->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:41028->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:55607->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:53043->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9184652846607504075.1254702370456089098. HINFO: read udp 10.244.0.3:59051->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cc9e935c2be8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7004956921991170887.7739787294269005588. HINFO: read udp 10.244.0.2:39950->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7004956921991170887.7739787294269005588. HINFO: read udp 10.244.0.2:40961->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7004956921991170887.7739787294269005588. HINFO: read udp 10.244.0.2:56724->10.0.2.3:53: i/o timeout
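	
	All four coredns instances fail the same way: HINFO self-test queries to the upstream resolver at 10.0.2.3:53 (the address QEMU's user-mode networking gives its built-in DNS) time out, so forwarding never becomes healthy. A quick manual check from inside the guest, assuming dig is available in the Buildroot image:
	
	  dig +time=2 +tries=1 @10.0.2.3 kubernetes.io A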
	
	
	==> describe nodes <==
	Name:               running-upgrade-992000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-992000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=running-upgrade-992000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T12_31_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:31:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-992000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:31:42 +0000   Wed, 31 Jul 2024 19:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:31:42 +0000   Wed, 31 Jul 2024 19:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:31:42 +0000   Wed, 31 Jul 2024 19:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:31:42 +0000   Wed, 31 Jul 2024 19:31:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-992000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa573dfaac0345c3bda990e69cbab194
	  System UUID:                aa573dfaac0345c3bda990e69cbab194
	  Boot ID:                    175b6f99-14fd-4711-ac34-ee06c85888f2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-c9ffl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-dkx9d                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-992000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-992000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-992000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-nn9n2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-992000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-992000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-992000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-992000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-992000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-992000 event: Registered Node running-upgrade-992000 in Controller
	
	
	==> dmesg <==
	[  +0.074850] systemd-fstab-generator[892]: Ignoring "noauto" for root device
	[  +0.070886] systemd-fstab-generator[903]: Ignoring "noauto" for root device
	[  +1.141160] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.071630] systemd-fstab-generator[1053]: Ignoring "noauto" for root device
	[  +0.075455] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +2.976147] systemd-fstab-generator[1292]: Ignoring "noauto" for root device
	[  +9.133717] systemd-fstab-generator[1930]: Ignoring "noauto" for root device
	[Jul31 19:27] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.170610] systemd-fstab-generator[2634]: Ignoring "noauto" for root device
	[  +0.203931] systemd-fstab-generator[2671]: Ignoring "noauto" for root device
	[  +0.113783] systemd-fstab-generator[2688]: Ignoring "noauto" for root device
	[  +0.114971] systemd-fstab-generator[2723]: Ignoring "noauto" for root device
	[  +5.173004] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.460966] systemd-fstab-generator[3905]: Ignoring "noauto" for root device
	[  +0.085111] systemd-fstab-generator[3916]: Ignoring "noauto" for root device
	[  +0.093692] systemd-fstab-generator[3927]: Ignoring "noauto" for root device
	[  +0.097358] systemd-fstab-generator[3941]: Ignoring "noauto" for root device
	[  +2.362616] systemd-fstab-generator[4214]: Ignoring "noauto" for root device
	[  +2.338512] systemd-fstab-generator[4570]: Ignoring "noauto" for root device
	[  +1.315350] systemd-fstab-generator[4713]: Ignoring "noauto" for root device
	[  +1.880400] kauditd_printk_skb: 80 callbacks suppressed
	[ +14.918307] kauditd_printk_skb: 3 callbacks suppressed
	[Jul31 19:31] systemd-fstab-generator[13457]: Ignoring "noauto" for root device
	[  +5.652483] systemd-fstab-generator[14041]: Ignoring "noauto" for root device
	[  +0.471955] systemd-fstab-generator[14174]: Ignoring "noauto" for root device
	
	
	==> etcd [1b732205e616] <==
	{"level":"info","ts":"2024-07-31T19:31:38.374Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:31:38.374Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:31:38.373Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T19:31:38.374Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T19:31:38.374Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-31T19:31:38.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-31T19:31:38.374Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-992000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:31:38.627Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:31:38.631Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:31:38.626Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:31:38.643Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-31T19:31:38.643Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:31:38.643Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:31:38.666Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:31:38.670Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:35:59 up 9 min,  0 users,  load average: 0.54, 0.42, 0.22
	Linux running-upgrade-992000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ee0d60a27422] <==
	I0731 19:31:40.066672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:31:40.066706       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 19:31:40.066816       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 19:31:40.068601       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 19:31:40.069032       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:31:40.108027       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:31:40.116067       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 19:31:40.793073       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 19:31:40.982811       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 19:31:40.987513       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 19:31:40.987552       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:31:41.148852       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:31:41.158783       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:31:41.230136       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 19:31:41.232134       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0731 19:31:41.232451       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 19:31:41.233865       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:31:42.117247       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 19:31:42.709589       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 19:31:42.713053       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 19:31:42.717988       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 19:31:42.764224       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:31:55.424148       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0731 19:31:55.670988       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0731 19:31:56.155413       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [071d0304665b] <==
	I0731 19:31:54.971158       1 shared_informer.go:262] Caches are synced for service account
	I0731 19:31:54.971176       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 19:31:54.972270       1 shared_informer.go:262] Caches are synced for disruption
	I0731 19:31:54.972275       1 disruption.go:371] Sending events to api server.
	I0731 19:31:54.972299       1 shared_informer.go:262] Caches are synced for expand
	I0731 19:31:54.972358       1 shared_informer.go:262] Caches are synced for persistent volume
	I0731 19:31:54.972368       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0731 19:31:55.020901       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0731 19:31:55.023362       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0731 19:31:55.035916       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 19:31:55.043206       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 19:31:55.044337       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 19:31:55.044363       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 19:31:55.044381       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 19:31:55.120203       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0731 19:31:55.120439       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0731 19:31:55.126165       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:31:55.178875       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:31:55.426511       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0731 19:31:55.588663       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:31:55.640213       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:31:55.640318       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 19:31:55.673467       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nn9n2"
	I0731 19:31:55.973596       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-c9ffl"
	I0731 19:31:55.977031       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dkx9d"
	
	
	==> kube-proxy [acab21097160] <==
	I0731 19:31:56.143840       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0731 19:31:56.143879       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0731 19:31:56.143888       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 19:31:56.153546       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 19:31:56.153558       1 server_others.go:206] "Using iptables Proxier"
	I0731 19:31:56.153574       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 19:31:56.153672       1 server.go:661] "Version info" version="v1.24.1"
	I0731 19:31:56.153684       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:31:56.153907       1 config.go:317] "Starting service config controller"
	I0731 19:31:56.153918       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 19:31:56.153929       1 config.go:226] "Starting endpoint slice config controller"
	I0731 19:31:56.153940       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 19:31:56.154185       1 config.go:444] "Starting node config controller"
	I0731 19:31:56.154196       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 19:31:56.254397       1 shared_informer.go:262] Caches are synced for node config
	I0731 19:31:56.254414       1 shared_informer.go:262] Caches are synced for service config
	I0731 19:31:56.254462       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6f6c25f5dc4c] <==
	W0731 19:31:40.023991       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 19:31:40.023995       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 19:31:40.024025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:31:40.024032       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 19:31:40.024045       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:31:40.024048       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:31:40.024060       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:31:40.024063       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 19:31:40.024073       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:31:40.024077       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:31:40.024099       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:31:40.024106       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:31:40.024122       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:31:40.024129       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:31:40.024144       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 19:31:40.024151       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 19:31:40.024190       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:31:40.024197       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:31:40.024274       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:31:40.024286       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:31:40.915428       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:31:40.915515       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:31:41.056241       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:31:41.056331       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0731 19:31:43.523014       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-31 19:26:26 UTC, ends at Wed 2024-07-31 19:35:59 UTC. --
	Jul 31 19:31:43 running-upgrade-992000 kubelet[14047]: E0731 19:31:43.140188   14047 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-992000\" already exists" pod="kube-system/etcd-running-upgrade-992000"
	Jul 31 19:31:43 running-upgrade-992000 kubelet[14047]: E0731 19:31:43.350930   14047 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-992000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-992000"
	Jul 31 19:31:43 running-upgrade-992000 kubelet[14047]: I0731 19:31:43.737881   14047 apiserver.go:52] "Watching apiserver"
	Jul 31 19:31:44 running-upgrade-992000 kubelet[14047]: I0731 19:31:44.170478   14047 reconciler.go:157] "Reconciler: start to sync state"
	Jul 31 19:31:44 running-upgrade-992000 kubelet[14047]: E0731 19:31:44.339676   14047 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-992000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-992000"
	Jul 31 19:31:44 running-upgrade-992000 kubelet[14047]: E0731 19:31:44.540216   14047 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-992000\" already exists" pod="kube-system/etcd-running-upgrade-992000"
	Jul 31 19:31:44 running-upgrade-992000 kubelet[14047]: E0731 19:31:44.740436   14047 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-992000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-992000"
	Jul 31 19:31:54 running-upgrade-992000 kubelet[14047]: I0731 19:31:54.933816   14047 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:31:54 running-upgrade-992000 kubelet[14047]: I0731 19:31:54.940623   14047 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 19:31:54 running-upgrade-992000 kubelet[14047]: I0731 19:31:54.940933   14047 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 19:31:54 running-upgrade-992000 kubelet[14047]: I0731 19:31:54.941208   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3087649c-101a-4402-8a2a-deb7385ce020-tmp\") pod \"storage-provisioner\" (UID: \"3087649c-101a-4402-8a2a-deb7385ce020\") " pod="kube-system/storage-provisioner"
	Jul 31 19:31:54 running-upgrade-992000 kubelet[14047]: I0731 19:31:54.941222   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wgf\" (UniqueName: \"kubernetes.io/projected/3087649c-101a-4402-8a2a-deb7385ce020-kube-api-access-64wgf\") pod \"storage-provisioner\" (UID: \"3087649c-101a-4402-8a2a-deb7385ce020\") " pod="kube-system/storage-provisioner"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.676232   14047 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.845330   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k86hc\" (UniqueName: \"kubernetes.io/projected/35d72ea1-426b-4c2d-8038-c4e284723136-kube-api-access-k86hc\") pod \"kube-proxy-nn9n2\" (UID: \"35d72ea1-426b-4c2d-8038-c4e284723136\") " pod="kube-system/kube-proxy-nn9n2"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.845359   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35d72ea1-426b-4c2d-8038-c4e284723136-kube-proxy\") pod \"kube-proxy-nn9n2\" (UID: \"35d72ea1-426b-4c2d-8038-c4e284723136\") " pod="kube-system/kube-proxy-nn9n2"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.845368   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35d72ea1-426b-4c2d-8038-c4e284723136-xtables-lock\") pod \"kube-proxy-nn9n2\" (UID: \"35d72ea1-426b-4c2d-8038-c4e284723136\") " pod="kube-system/kube-proxy-nn9n2"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.845522   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d72ea1-426b-4c2d-8038-c4e284723136-lib-modules\") pod \"kube-proxy-nn9n2\" (UID: \"35d72ea1-426b-4c2d-8038-c4e284723136\") " pod="kube-system/kube-proxy-nn9n2"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.977205   14047 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:31:55 running-upgrade-992000 kubelet[14047]: I0731 19:31:55.980118   14047 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:31:56 running-upgrade-992000 kubelet[14047]: I0731 19:31:56.146180   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqtvr\" (UniqueName: \"kubernetes.io/projected/f760aa5f-7016-47c0-b4f5-c6f9fb9e880e-kube-api-access-qqtvr\") pod \"coredns-6d4b75cb6d-c9ffl\" (UID: \"f760aa5f-7016-47c0-b4f5-c6f9fb9e880e\") " pod="kube-system/coredns-6d4b75cb6d-c9ffl"
	Jul 31 19:31:56 running-upgrade-992000 kubelet[14047]: I0731 19:31:56.146206   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d17e02d-a6d9-49ac-9c9c-59fc42e376f7-config-volume\") pod \"coredns-6d4b75cb6d-dkx9d\" (UID: \"0d17e02d-a6d9-49ac-9c9c-59fc42e376f7\") " pod="kube-system/coredns-6d4b75cb6d-dkx9d"
	Jul 31 19:31:56 running-upgrade-992000 kubelet[14047]: I0731 19:31:56.146219   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f760aa5f-7016-47c0-b4f5-c6f9fb9e880e-config-volume\") pod \"coredns-6d4b75cb6d-c9ffl\" (UID: \"f760aa5f-7016-47c0-b4f5-c6f9fb9e880e\") " pod="kube-system/coredns-6d4b75cb6d-c9ffl"
	Jul 31 19:31:56 running-upgrade-992000 kubelet[14047]: I0731 19:31:56.146231   14047 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vrx\" (UniqueName: \"kubernetes.io/projected/0d17e02d-a6d9-49ac-9c9c-59fc42e376f7-kube-api-access-f2vrx\") pod \"coredns-6d4b75cb6d-dkx9d\" (UID: \"0d17e02d-a6d9-49ac-9c9c-59fc42e376f7\") " pod="kube-system/coredns-6d4b75cb6d-dkx9d"
	Jul 31 19:35:44 running-upgrade-992000 kubelet[14047]: I0731 19:35:44.820248   14047 scope.go:110] "RemoveContainer" containerID="7a912ac1efd83bed6d0bb3df303f02caf02c3efeb2d39ef64e6fff1d421604fb"
	Jul 31 19:35:44 running-upgrade-992000 kubelet[14047]: I0731 19:35:44.831032   14047 scope.go:110] "RemoveContainer" containerID="8c40f804149131d3437bce1e856b79f2f5bb8cf06503dc2e6f99defa7d99f794"
	
	
	==> storage-provisioner [94a71bce1887] <==
	I0731 19:31:55.438967       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 19:31:55.442470       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 19:31:55.442484       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 19:31:55.445507       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 19:31:55.445655       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7255b907-6cea-4090-a925-ec5f5b6eabb3", APIVersion:"v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-992000_f65d53c5-e188-429a-a5a2-b2e4333d2a35 became leader
	I0731 19:31:55.445687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-992000_f65d53c5-e188-429a-a5a2-b2e4333d2a35!
	I0731 19:31:55.546361       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-992000_f65d53c5-e188-429a-a5a2-b2e4333d2a35!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-992000 -n running-upgrade-992000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-992000 -n running-upgrade-992000: exit status 2 (15.5386075s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-992000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-992000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-992000
--- FAIL: TestRunningBinaryUpgrade (625.00s)
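
The post-mortem above ends with the upgraded cluster's apiserver reported as "Stopped". For triaging a run like this by hand, a minimal sketch (the -o json output and --file flags are standard minikube options; the profile name is specific to this run):

	out/minikube-darwin-arm64 status -p running-upgrade-992000 -o json
	out/minikube-darwin-arm64 logs -p running-upgrade-992000 --file=logs.txt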

TestKubernetesUpgrade (18.6s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-389000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-389000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.022922041s)

-- stdout --
	* [kubernetes-upgrade-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-389000" primary control-plane node in "kubernetes-upgrade-389000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-389000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:25:32.250238    8547 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:32.250350    8547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:32.250354    8547 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:32.250356    8547 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:32.250500    8547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:25:32.251557    8547 out.go:298] Setting JSON to false
	I0731 12:25:32.267611    8547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5095,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:25:32.267671    8547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:25:32.272330    8547 out.go:177] * [kubernetes-upgrade-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:25:32.279382    8547 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:25:32.279426    8547 notify.go:220] Checking for updates...
	I0731 12:25:32.287329    8547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:25:32.288681    8547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:25:32.291300    8547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:25:32.294313    8547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:25:32.297349    8547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:25:32.300568    8547 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:32.300632    8547 config.go:182] Loaded profile config "offline-docker-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:25:32.300675    8547 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:25:32.303305    8547 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:25:32.310324    8547 start.go:297] selected driver: qemu2
	I0731 12:25:32.310333    8547 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:25:32.310340    8547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:25:32.312495    8547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:25:32.315299    8547 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:25:32.319434    8547 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:25:32.319455    8547 cni.go:84] Creating CNI manager for ""
	I0731 12:25:32.319463    8547 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:25:32.319512    8547 start.go:340] cluster config:
	{Name:kubernetes-upgrade-389000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:32.323141    8547 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:32.331351    8547 out.go:177] * Starting "kubernetes-upgrade-389000" primary control-plane node in "kubernetes-upgrade-389000" cluster
	I0731 12:25:32.335313    8547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:25:32.335330    8547 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:25:32.335345    8547 cache.go:56] Caching tarball of preloaded images
	I0731 12:25:32.335414    8547 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:25:32.335420    8547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:25:32.335481    8547 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kubernetes-upgrade-389000/config.json ...
	I0731 12:25:32.335492    8547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kubernetes-upgrade-389000/config.json: {Name:mk1f2123a4422d1c322d6b363a3547a303b877c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:25:32.335849    8547 start.go:360] acquireMachinesLock for kubernetes-upgrade-389000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:32.345711    8547 start.go:364] duration metric: took 9.853542ms to acquireMachinesLock for "kubernetes-upgrade-389000"
	I0731 12:25:32.345728    8547 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:25:32.345771    8547 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:25:32.350391    8547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:25:32.368766    8547 start.go:159] libmachine.API.Create for "kubernetes-upgrade-389000" (driver="qemu2")
	I0731 12:25:32.368791    8547 client.go:168] LocalClient.Create starting
	I0731 12:25:32.368869    8547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:25:32.368903    8547 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:32.368917    8547 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:32.368959    8547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:25:32.368985    8547 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:32.368997    8547 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:32.369348    8547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:25:32.654502    8547 main.go:141] libmachine: Creating SSH key...
	I0731 12:25:32.761487    8547 main.go:141] libmachine: Creating Disk image...
	I0731 12:25:32.761492    8547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:25:32.761663    8547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:32.771401    8547 main.go:141] libmachine: STDOUT: 
	I0731 12:25:32.771420    8547 main.go:141] libmachine: STDERR: 
	I0731 12:25:32.771479    8547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2 +20000M
	I0731 12:25:32.779458    8547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:25:32.779474    8547 main.go:141] libmachine: STDERR: 
	I0731 12:25:32.779486    8547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:32.779490    8547 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:25:32.779498    8547 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:32.779526    8547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:04:2d:9b:67:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:32.781151    8547 main.go:141] libmachine: STDOUT: 
	I0731 12:25:32.781163    8547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:32.781191    8547 client.go:171] duration metric: took 412.39875ms to LocalClient.Create
	I0731 12:25:34.783296    8547 start.go:128] duration metric: took 2.437585958s to createHost
	I0731 12:25:34.783355    8547 start.go:83] releasing machines lock for "kubernetes-upgrade-389000", held for 2.437716958s
	W0731 12:25:34.783466    8547 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:34.798455    8547 out.go:177] * Deleting "kubernetes-upgrade-389000" in qemu2 ...
	W0731 12:25:34.836890    8547 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:34.836913    8547 start.go:729] Will try again in 5 seconds ...
	I0731 12:25:39.838958    8547 start.go:360] acquireMachinesLock for kubernetes-upgrade-389000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:39.839157    8547 start.go:364] duration metric: took 139.667µs to acquireMachinesLock for "kubernetes-upgrade-389000"
	I0731 12:25:39.839202    8547 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:25:39.839281    8547 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:25:39.846975    8547 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:25:39.870386    8547 start.go:159] libmachine.API.Create for "kubernetes-upgrade-389000" (driver="qemu2")
	I0731 12:25:39.870417    8547 client.go:168] LocalClient.Create starting
	I0731 12:25:39.870486    8547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:25:39.870518    8547 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:39.870530    8547 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:39.870573    8547 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:25:39.870592    8547 main.go:141] libmachine: Decoding PEM data...
	I0731 12:25:39.870602    8547 main.go:141] libmachine: Parsing certificate...
	I0731 12:25:39.870957    8547 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:25:40.065894    8547 main.go:141] libmachine: Creating SSH key...
	I0731 12:25:40.193586    8547 main.go:141] libmachine: Creating Disk image...
	I0731 12:25:40.193593    8547 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:25:40.193778    8547 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:40.203294    8547 main.go:141] libmachine: STDOUT: 
	I0731 12:25:40.203316    8547 main.go:141] libmachine: STDERR: 
	I0731 12:25:40.203391    8547 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2 +20000M
	I0731 12:25:40.211826    8547 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:25:40.211848    8547 main.go:141] libmachine: STDERR: 
	I0731 12:25:40.211868    8547 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:40.211873    8547 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:25:40.211886    8547 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:40.211920    8547 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:34:82:18:32:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:40.213655    8547 main.go:141] libmachine: STDOUT: 
	I0731 12:25:40.213673    8547 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:40.213688    8547 client.go:171] duration metric: took 343.278333ms to LocalClient.Create
	I0731 12:25:42.213995    8547 start.go:128] duration metric: took 2.374725375s to createHost
	I0731 12:25:42.214029    8547 start.go:83] releasing machines lock for "kubernetes-upgrade-389000", held for 2.374941208s
	W0731 12:25:42.214208    8547 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-389000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:42.222985    8547 out.go:177] 
	W0731 12:25:42.225982    8547 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:42.225994    8547 out.go:239] * 
	* 
	W0731 12:25:42.227186    8547 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:25:42.237030    8547 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-389000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-389000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-389000: (3.177098s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-389000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-389000 status --format={{.Host}}: exit status 7 (32.258ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-389000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-389000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.194794542s)

-- stdout --
	* [kubernetes-upgrade-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-389000" primary control-plane node in "kubernetes-upgrade-389000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-389000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-389000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:25:45.486878    8599 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:25:45.487043    8599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:45.487047    8599 out.go:304] Setting ErrFile to fd 2...
	I0731 12:25:45.487049    8599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:25:45.487196    8599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:25:45.488302    8599 out.go:298] Setting JSON to false
	I0731 12:25:45.506700    8599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5108,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:25:45.506772    8599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:25:45.511534    8599 out.go:177] * [kubernetes-upgrade-389000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:25:45.519573    8599 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:25:45.519600    8599 notify.go:220] Checking for updates...
	I0731 12:25:45.525536    8599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:25:45.528503    8599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:25:45.531535    8599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:25:45.534528    8599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:25:45.537540    8599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:25:45.540823    8599 config.go:182] Loaded profile config "kubernetes-upgrade-389000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:25:45.541096    8599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:25:45.545379    8599 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:25:45.552494    8599 start.go:297] selected driver: qemu2
	I0731 12:25:45.552505    8599 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:45.552580    8599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:25:45.554933    8599 cni.go:84] Creating CNI manager for ""
	I0731 12:25:45.554948    8599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:25:45.554972    8599 start.go:340] cluster config:
	{Name:kubernetes-upgrade-389000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-389000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:25:45.558295    8599 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:25:45.571531    8599 out.go:177] * Starting "kubernetes-upgrade-389000" primary control-plane node in "kubernetes-upgrade-389000" cluster
	I0731 12:25:45.574379    8599 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:25:45.574398    8599 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:25:45.574413    8599 cache.go:56] Caching tarball of preloaded images
	I0731 12:25:45.574461    8599 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:25:45.574466    8599 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:25:45.574510    8599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kubernetes-upgrade-389000/config.json ...
	I0731 12:25:45.574873    8599 start.go:360] acquireMachinesLock for kubernetes-upgrade-389000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:45.574905    8599 start.go:364] duration metric: took 25.541µs to acquireMachinesLock for "kubernetes-upgrade-389000"
	I0731 12:25:45.574915    8599 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:45.574922    8599 fix.go:54] fixHost starting: 
	I0731 12:25:45.575025    8599 fix.go:112] recreateIfNeeded on kubernetes-upgrade-389000: state=Stopped err=<nil>
	W0731 12:25:45.575033    8599 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:25:45.579500    8599 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-389000" ...
	I0731 12:25:45.586507    8599 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:45.586549    8599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:34:82:18:32:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:45.588444    8599 main.go:141] libmachine: STDOUT: 
	I0731 12:25:45.588464    8599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:45.588490    8599 fix.go:56] duration metric: took 13.5695ms for fixHost
	I0731 12:25:45.588494    8599 start.go:83] releasing machines lock for "kubernetes-upgrade-389000", held for 13.585875ms
	W0731 12:25:45.588502    8599 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:45.588536    8599 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:45.588540    8599 start.go:729] Will try again in 5 seconds ...
	I0731 12:25:50.589595    8599 start.go:360] acquireMachinesLock for kubernetes-upgrade-389000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:25:50.590106    8599 start.go:364] duration metric: took 397.334µs to acquireMachinesLock for "kubernetes-upgrade-389000"
	I0731 12:25:50.590251    8599 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:25:50.590273    8599 fix.go:54] fixHost starting: 
	I0731 12:25:50.591031    8599 fix.go:112] recreateIfNeeded on kubernetes-upgrade-389000: state=Stopped err=<nil>
	W0731 12:25:50.591060    8599 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:25:50.596325    8599 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-389000" ...
	I0731 12:25:50.606444    8599 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:25:50.606714    8599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:34:82:18:32:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubernetes-upgrade-389000/disk.qcow2
	I0731 12:25:50.616898    8599 main.go:141] libmachine: STDOUT: 
	I0731 12:25:50.616976    8599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:25:50.617070    8599 fix.go:56] duration metric: took 26.800333ms for fixHost
	I0731 12:25:50.617091    8599 start.go:83] releasing machines lock for "kubernetes-upgrade-389000", held for 26.959917ms
	W0731 12:25:50.617323    8599 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-389000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-389000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:25:50.624257    8599 out.go:177] 
	W0731 12:25:50.628363    8599 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:25:50.628398    8599 out.go:239] * 
	* 
	W0731 12:25:50.630385    8599 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:25:50.640220    8599 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-389000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-389000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-389000 version --output=json: exit status 1 (61.154666ms)

** stderr ** 
	error: context "kubernetes-upgrade-389000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-31 12:25:50.715213 -0700 PDT m=+714.502915542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-389000 -n kubernetes-upgrade-389000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-389000 -n kubernetes-upgrade-389000: exit status 7 (32.960375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-389000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-389000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-389000
--- FAIL: TestKubernetesUpgrade (18.60s)
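The GUEST_PROVISION failure above reduces to the qemu2 driver being refused a connection on /var/run/socket_vmnet, so the guest is never provisioned and the later kubectl call fails on a missing context. A minimal Go sketch of the same reachability check, useful before rerunning the suite; the socket path is copied from the error text and may differ if socket_vmnet was started elsewhere:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from the GUEST_PROVISION error above; adjust if your
		// socket_vmnet daemon listens on a different control socket.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1) // mirrors the driver's "Connection refused" symptom
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

A refusal here reproduces the driver's failure without paying for a full minikube start.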

TestStoppedBinaryUpgrade/Upgrade (599.67s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1852384083 start -p stopped-upgrade-155000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1852384083 start -p stopped-upgrade-155000 --memory=2200 --vm-driver=qemu2 : (51.888653875s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1852384083 -p stopped-upgrade-155000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1852384083 -p stopped-upgrade-155000 stop: (12.106605834s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m55.607808667s)

-- stdout --
	* [stopped-upgrade-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-155000" primary control-plane node in "stopped-upgrade-155000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-155000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 12:26:44.864364    8642 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:26:44.864518    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:44.864523    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:26:44.864526    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:26:44.864669    8642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:26:44.865918    8642 out.go:298] Setting JSON to false
	I0731 12:26:44.885394    8642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5167,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:26:44.885477    8642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:26:44.890845    8642 out.go:177] * [stopped-upgrade-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:26:44.898980    8642 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:26:44.899026    8642 notify.go:220] Checking for updates...
	I0731 12:26:44.904852    8642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:26:44.907868    8642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:26:44.910881    8642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:26:44.912271    8642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:26:44.914808    8642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:26:44.918133    8642 config.go:182] Loaded profile config "stopped-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:26:44.921832    8642 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:26:44.924823    8642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:26:44.928891    8642 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:26:44.935828    8642 start.go:297] selected driver: qemu2
	I0731 12:26:44.935839    8642 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51260 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:26:44.935920    8642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:26:44.938916    8642 cni.go:84] Creating CNI manager for ""
	I0731 12:26:44.938933    8642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:26:44.938961    8642 start.go:340] cluster config:
	{Name:stopped-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51260 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:26:44.939019    8642 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:26:44.946827    8642 out.go:177] * Starting "stopped-upgrade-155000" primary control-plane node in "stopped-upgrade-155000" cluster
	I0731 12:26:44.950827    8642 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:26:44.950844    8642 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:26:44.950852    8642 cache.go:56] Caching tarball of preloaded images
	I0731 12:26:44.950911    8642 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:26:44.950916    8642 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:26:44.950973    8642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/config.json ...
	I0731 12:26:44.951371    8642 start.go:360] acquireMachinesLock for stopped-upgrade-155000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:26:44.951403    8642 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "stopped-upgrade-155000"
	I0731 12:26:44.951411    8642 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:26:44.951415    8642 fix.go:54] fixHost starting: 
	I0731 12:26:44.951523    8642 fix.go:112] recreateIfNeeded on stopped-upgrade-155000: state=Stopped err=<nil>
	W0731 12:26:44.951530    8642 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:26:44.955778    8642 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-155000" ...
	I0731 12:26:44.963806    8642 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:26:44.963872    8642 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51225-:22,hostfwd=tcp::51226-:2376,hostname=stopped-upgrade-155000 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/disk.qcow2
	I0731 12:26:45.010890    8642 main.go:141] libmachine: STDOUT: 
	I0731 12:26:45.010917    8642 main.go:141] libmachine: STDERR: 
	I0731 12:26:45.010923    8642 main.go:141] libmachine: Waiting for VM to start (ssh -p 51225 docker@127.0.0.1)...
	I0731 12:27:05.370841    8642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/config.json ...
	I0731 12:27:05.371229    8642 machine.go:94] provisionDockerMachine start ...
	I0731 12:27:05.371376    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.371619    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.371627    8642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:27:05.441874    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 12:27:05.441890    8642 buildroot.go:166] provisioning hostname "stopped-upgrade-155000"
	I0731 12:27:05.441958    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.442086    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.442094    8642 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-155000 && echo "stopped-upgrade-155000" | sudo tee /etc/hostname
	I0731 12:27:05.509632    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-155000
	
	I0731 12:27:05.509687    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.509807    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.509815    8642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-155000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-155000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-155000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:27:05.575029    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:27:05.575043    8642 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19355-6494/.minikube CaCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19355-6494/.minikube}
	I0731 12:27:05.575054    8642 buildroot.go:174] setting up certificates
	I0731 12:27:05.575058    8642 provision.go:84] configureAuth start
	I0731 12:27:05.575066    8642 provision.go:143] copyHostCerts
	I0731 12:27:05.575159    8642 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem, removing ...
	I0731 12:27:05.575167    8642 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem
	I0731 12:27:05.575277    8642 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.pem (1082 bytes)
	I0731 12:27:05.575473    8642 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem, removing ...
	I0731 12:27:05.575478    8642 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem
	I0731 12:27:05.575532    8642 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/cert.pem (1123 bytes)
	I0731 12:27:05.575640    8642 exec_runner.go:144] found /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem, removing ...
	I0731 12:27:05.575645    8642 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem
	I0731 12:27:05.575690    8642 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19355-6494/.minikube/key.pem (1675 bytes)
	I0731 12:27:05.575771    8642 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-155000 san=[127.0.0.1 localhost minikube stopped-upgrade-155000]
	I0731 12:27:05.652724    8642 provision.go:177] copyRemoteCerts
	I0731 12:27:05.652778    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:27:05.652787    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:27:05.688257    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:27:05.694772    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:27:05.701514    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 12:27:05.709029    8642 provision.go:87] duration metric: took 133.9695ms to configureAuth
	I0731 12:27:05.709041    8642 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:27:05.709181    8642 config.go:182] Loaded profile config "stopped-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:27:05.709215    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.709301    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.709307    8642 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:27:05.774864    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:27:05.774875    8642 buildroot.go:70] root file system type: tmpfs
	I0731 12:27:05.774933    8642 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:27:05.774991    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.775161    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.775204    8642 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:27:05.842812    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:27:05.842884    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:05.843012    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:05.843021    8642 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:27:06.231997    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
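	The exchange above shows the provisioner's compare-and-swap pattern for the Docker unit: render the file to docker.service.new, diff it against the installed unit, and only move it into place (followed by daemon-reload, enable, restart) when they differ. On this freshly restored VM the diff fails because no unit exists yet, so the new file is installed and the symlink is created. The same logic sketched locally in Go, assuming plain file paths instead of the SSH session used above:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const unit = "/lib/systemd/system/docker.service"
		rendered, err := os.ReadFile(unit + ".new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// If the installed unit already matches, skip the disruptive restart.
		if installed, err := os.ReadFile(unit); err == nil && bytes.Equal(installed, rendered) {
			fmt.Println("unit unchanged; nothing to do")
			return
		}
		// Unit differs or is missing (as in the log above): swap it in.
		if err := os.Rename(unit+".new", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("unit installed; daemon-reload, enable and restart would follow")
	}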
	I0731 12:27:06.232010    8642 machine.go:97] duration metric: took 860.803ms to provisionDockerMachine
	I0731 12:27:06.232017    8642 start.go:293] postStartSetup for "stopped-upgrade-155000" (driver="qemu2")
	I0731 12:27:06.232025    8642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:27:06.232137    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:27:06.232151    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:27:06.268394    8642 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:27:06.269813    8642 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:27:06.269821    8642 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/addons for local assets ...
	I0731 12:27:06.269910    8642 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19355-6494/.minikube/files for local assets ...
	I0731 12:27:06.270030    8642 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0731 12:27:06.270157    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:27:06.272964    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:06.280067    8642 start.go:296] duration metric: took 48.043041ms for postStartSetup
	I0731 12:27:06.280086    8642 fix.go:56] duration metric: took 21.329396584s for fixHost
	I0731 12:27:06.280150    8642 main.go:141] libmachine: Using SSH client type: native
	I0731 12:27:06.280276    8642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100ee6a10] 0x100ee9270 <nil>  [] 0s} localhost 51225 <nil> <nil>}
	I0731 12:27:06.280282    8642 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:27:06.346263    8642 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454026.563938463
	
	I0731 12:27:06.346273    8642 fix.go:216] guest clock: 1722454026.563938463
	I0731 12:27:06.346277    8642 fix.go:229] Guest: 2024-07-31 12:27:06.563938463 -0700 PDT Remote: 2024-07-31 12:27:06.280088 -0700 PDT m=+21.447041084 (delta=283.850463ms)
	I0731 12:27:06.346295    8642 fix.go:200] guest clock delta is within tolerance: 283.850463ms
	I0731 12:27:06.346297    8642 start.go:83] releasing machines lock for "stopped-upgrade-155000", held for 21.395618167s
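	The fix.go lines above validate the guest clock by running date +%s.%N in the VM and comparing it with the host's wall clock; a delta under the tolerance is accepted without resynchronizing. Redoing the arithmetic from the two logged timestamps:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest: 1722454026.563938463, from `date +%s.%N` above.
		// Host:  2024-07-31 12:27:06.280088 -0700 PDT, the "Remote" timestamp.
		guest := time.Unix(1722454026, 563938463)
		host := time.Date(2024, 7, 31, 12, 27, 6, 280088000, time.FixedZone("PDT", -7*60*60))
		fmt.Println(guest.Sub(host)) // 283.850463ms, matching the logged delta
	}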
	I0731 12:27:06.346378    8642 ssh_runner.go:195] Run: cat /version.json
	I0731 12:27:06.346387    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:27:06.346398    8642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:27:06.346415    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	W0731 12:27:06.347145    8642 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51411->127.0.0.1:51225: write: broken pipe
	I0731 12:27:06.347163    8642 retry.go:31] will retry after 230.602924ms: ssh: handshake failed: write tcp 127.0.0.1:51411->127.0.0.1:51225: write: broken pipe
	W0731 12:27:06.610590    8642 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:27:06.610673    8642 ssh_runner.go:195] Run: systemctl --version
	I0731 12:27:06.612579    8642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:27:06.614309    8642 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:27:06.614336    8642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:27:06.617222    8642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:27:06.622899    8642 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
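	The two find/sed invocations above rewrite any pre-existing bridge or podman CNI config so its subnet matches the 10.244.0.0/16 pod CIDR that kubeadm is handed later in this run. The core substitution, sketched in Go against an illustrative config fragment (the JSON contents are made up for the example):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Same effect as the sed expression above: force the subnet field
		// of a CNI config onto the cluster's pod CIDR.
		re := regexp.MustCompile(`"subnet": ".*"`)
		conf := `{ "type": "bridge", "ipam": { "subnet": "10.88.0.0/16" } }` // illustrative
		fmt.Println(re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`))
	}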
	I0731 12:27:06.622910    8642 start.go:495] detecting cgroup driver to use...
	I0731 12:27:06.623030    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:06.629951    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:27:06.633173    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:27:06.636479    8642 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:27:06.636507    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:27:06.639958    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:06.643683    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:27:06.646818    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:27:06.649751    8642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:27:06.653181    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:27:06.656332    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:27:06.659299    8642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:27:06.662213    8642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:27:06.665100    8642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:27:06.668423    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:06.747317    8642 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:27:06.754486    8642 start.go:495] detecting cgroup driver to use...
	I0731 12:27:06.754554    8642 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:27:06.759995    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:06.768412    8642 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:27:06.774752    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:27:06.779285    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:27:06.783298    8642 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 12:27:06.827597    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:27:06.832681    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:27:06.838125    8642 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:27:06.839283    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:27:06.841640    8642 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:27:06.846681    8642 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:27:06.925906    8642 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:27:07.008884    8642 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:27:07.008956    8642 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:27:07.014533    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:07.091530    8642 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:08.220399    8642 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.128891709s)
	I0731 12:27:08.220465    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:27:08.224770    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:08.229043    8642 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:27:08.311674    8642 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:27:08.386850    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:08.450417    8642 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:27:08.455850    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:27:08.460171    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:08.540606    8642 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:27:08.580368    8642 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:27:08.580449    8642 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:27:08.583306    8642 start.go:563] Will wait 60s for crictl version
	I0731 12:27:08.583359    8642 ssh_runner.go:195] Run: which crictl
	I0731 12:27:08.585147    8642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:27:08.601716    8642 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:27:08.601804    8642 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:08.619889    8642 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:27:08.640985    8642 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:27:08.641053    8642 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:27:08.642256    8642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
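	The hosts rewrite above is idempotent: strip any previous host.minikube.internal line, then append a mapping to 10.0.2.2, the host side of QEMU's user-mode network. The same transform in Go, printing the result rather than copying it back over /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping, exactly like the grep -v above.
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "10.0.2.2\thost.minikube.internal")
		fmt.Println(strings.Join(kept, "\n"))
	}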
	I0731 12:27:08.645764    8642 kubeadm.go:883] updating cluster {Name:stopped-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51260 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:27:08.645811    8642 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:27:08.645853    8642 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:08.660494    8642 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:08.660503    8642 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:08.660548    8642 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:08.664016    8642 ssh_runner.go:195] Run: which lz4
	I0731 12:27:08.665247    8642 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:27:08.666488    8642 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:27:08.666499    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:27:09.653303    8642 docker.go:649] duration metric: took 988.116708ms to copy over tarball
	I0731 12:27:09.653363    8642 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:27:10.865967    8642 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.2126325s)
	I0731 12:27:10.865980    8642 ssh_runner.go:146] rm: /preloaded.tar.lz4
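	For scale: the preload tarball pushed above is about 343 MiB, and the logged copy metric of 988ms works out to roughly 347 MiB/s over the forwarded SSH connection. Checking that figure:

	package main

	import "fmt"

	func main() {
		// Size and duration taken from the preload lines above.
		const bytes = 359514331.0
		const seconds = 0.988116708
		fmt.Printf("%.0f MiB in %.3fs = %.0f MiB/s\n", bytes/(1<<20), seconds, bytes/seconds/(1<<20))
	}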
	I0731 12:27:10.882001    8642 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:27:10.885148    8642 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:27:10.890498    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:10.974508    8642 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:27:12.270803    8642 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.296309208s)
	I0731 12:27:12.270891    8642 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:27:12.289906    8642 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:27:12.289915    8642 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:27:12.289920    8642 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:27:12.294606    8642 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.296542    8642 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.298552    8642 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.298884    8642 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:27:12.300399    8642 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.300772    8642 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.302119    8642 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.302526    8642 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:27:12.303686    8642 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.303768    8642 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.305222    8642 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.305325    8642 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.306290    8642 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.306318    8642 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.307719    8642 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.308236    8642 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.683377    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:27:12.694305    8642 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:27:12.694343    8642 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:27:12.694407    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:27:12.703964    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:27:12.704074    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:27:12.706408    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.706527    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:27:12.706537    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:27:12.710723    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.714434    8642 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:27:12.714442    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:27:12.726111    8642 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:27:12.726131    8642 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.726187    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:27:12.748983    8642 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:27:12.749005    8642 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:27:12.749063    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0731 12:27:12.753027    8642 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:12.753271    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.753904    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.763790    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:27:12.763838    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:27:12.767724    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:27:12.767886    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:12.768612    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.783832    8642 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:27:12.783841    8642 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:27:12.783856    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:27:12.783856    8642 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.783882    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:27:12.783900    8642 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.783944    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:27:12.783948    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:27:12.795297    8642 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:27:12.795316    8642 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.795374    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:27:12.822526    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.824897    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:27:12.824932    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:27:12.824937    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:27:12.825043    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:12.880045    8642 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:27:12.880052    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:27:12.880063    8642 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.880085    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:27:12.880123    8642 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:27:12.913304    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0731 12:27:12.961614    8642 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:27:12.961829    8642 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.983502    8642 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:27:12.983518    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:27:12.997596    8642 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:27:12.997621    8642 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:12.997682    8642 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:27:13.103894    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:27:13.103932    8642 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:27:13.104053    8642 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:27:13.108398    8642 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:27:13.108430    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:27:13.109131    8642 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:27:13.109138    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:27:13.301305    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:27:13.301327    8642 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:27:13.301336    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:27:13.532995    8642 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:27:13.533036    8642 cache_images.go:92] duration metric: took 1.243151125s to LoadCachedImages
	W0731 12:27:13.533078    8642 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
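	The cache_images loop above works image by image: docker image inspect checks whether the expected content hash is present, and anything missing or mismatched is removed, transferred from the host cache into /var/lib/minikube/images, and piped into docker load. This run ultimately aborts the cache load because kube-proxy's cached file was absent on the host. A local sketch of the load step, minus the sudo/SSH indirection (the tarball path is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Stream a saved image tarball into the daemon, like the
		// "sudo cat <file> | docker load" commands above.
		f, err := os.Open("/var/lib/minikube/images/pause_3.7")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "docker load failed:", err)
			os.Exit(1)
		}
	}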
	I0731 12:27:13.533084    8642 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:27:13.533146    8642 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-155000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:27:13.533220    8642 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:27:13.546946    8642 cni.go:84] Creating CNI manager for ""
	I0731 12:27:13.546959    8642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:27:13.546965    8642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:27:13.546974    8642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-155000 NodeName:stopped-upgrade-155000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:27:13.547062    8642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-155000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
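The block above is the kubeadm v1beta3 config minikube renders in memory before writing it to /var/tmp/minikube/kubeadm.yaml.new. A generated file like this can be sanity-checked without touching node state via kubeadm's dry-run mode; a minimal sketch using the binary and file paths from this log (the --dry-run flag is standard kubeadm, but running it here is illustrative, not part of the test):

	sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run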
	I0731 12:27:13.547119    8642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:27:13.550106    8642 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:27:13.550139    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:27:13.552699    8642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:27:13.557696    8642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:27:13.562601    8642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:27:13.567933    8642 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:27:13.569353    8642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:27:13.572800    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:27:13.646887    8642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:27:13.655181    8642 certs.go:68] Setting up /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000 for IP: 10.0.2.15
	I0731 12:27:13.655191    8642 certs.go:194] generating shared ca certs ...
	I0731 12:27:13.655199    8642 certs.go:226] acquiring lock for ca certs: {Name:mkb4a8d4a0878ee693c06454a581729087f6c431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.655472    8642 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key
	I0731 12:27:13.655524    8642 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key
	I0731 12:27:13.655529    8642 certs.go:256] generating profile certs ...
	I0731 12:27:13.655607    8642 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.key
	I0731 12:27:13.655621    8642 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312
	I0731 12:27:13.655632    8642 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:27:13.719761    8642 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312 ...
	I0731 12:27:13.719774    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312: {Name:mk8adc6a1fe5d527ecfd0ab44b37d59a5c29a31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.720098    8642 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312 ...
	I0731 12:27:13.720103    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312: {Name:mk84174f961cf12d935f29b3e4833b564d681dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.720239    8642 certs.go:381] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt.4262e312 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt
	I0731 12:27:13.720360    8642 certs.go:385] copying /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key.4262e312 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key
	I0731 12:27:13.720512    8642 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/proxy-client.key
	I0731 12:27:13.720643    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem (1338 bytes)
	W0731 12:27:13.720678    8642 certs.go:480] ignoring /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0731 12:27:13.720684    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:27:13.720707    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:27:13.720731    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:27:13.720748    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/key.pem (1675 bytes)
	I0731 12:27:13.720787    8642 certs.go:484] found cert: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0731 12:27:13.721138    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:27:13.728779    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:27:13.736084    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:27:13.742594    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 12:27:13.748948    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:27:13.755968    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:27:13.762747    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:27:13.769506    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:27:13.776712    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0731 12:27:13.784557    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0731 12:27:13.791822    8642 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:27:13.799519    8642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:27:13.805464    8642 ssh_runner.go:195] Run: openssl version
	I0731 12:27:13.807596    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:27:13.810672    8642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:13.812151    8642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 19:26 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:13.812174    8642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:27:13.813975    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:27:13.816998    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0731 12:27:13.820406    8642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0731 12:27:13.822000    8642 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 19:15 /usr/share/ca-certificates/6986.pem
	I0731 12:27:13.822030    8642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0731 12:27:13.823849    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0731 12:27:13.826996    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0731 12:27:13.829901    8642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0731 12:27:13.831422    8642 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 19:15 /usr/share/ca-certificates/69862.pem
	I0731 12:27:13.831449    8642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0731 12:27:13.833308    8642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
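The .0 symlink names in the three ln -fs runs above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding CA files, which is exactly what the openssl x509 -hash calls compute. A sketch of the equivalent manual rehash for one cert, combining the same two commands the log runs separately:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"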
	I0731 12:27:13.837050    8642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:27:13.838705    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:27:13.840805    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:27:13.842821    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:27:13.844871    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:27:13.846887    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:27:13.848881    8642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
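Each -checkend 86400 call above asks whether the certificate remains valid for at least the next 86400 seconds (24 hours); openssl exits 0 if it will and 1 if it would expire within the window, so the result can be branched on directly (apiserver.crt path taken from the scp step earlier in this log):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for >=24h" || echo "expires within 24h"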
	I0731 12:27:13.850992    8642 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51260 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:27:13.851073    8642 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:13.863863    8642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:27:13.867162    8642 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:27:13.867170    8642 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:27:13.867205    8642 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:27:13.870198    8642 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:27:13.870243    8642 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-155000" does not appear in /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:27:13.870257    8642 kubeconfig.go:62] /Users/jenkins/minikube-integration/19355-6494/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-155000" cluster setting kubeconfig missing "stopped-upgrade-155000" context setting]
	I0731 12:27:13.870435    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:27:13.871040    8642 kapi.go:59] client config for stopped-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10227c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:27:13.871924    8642 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:27:13.875229    8642 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-155000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
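The drift reported above is the CRI socket gaining its unix:// scheme plus the kubelet flipping from the systemd to the cgroupfs cgroup driver (with two extra kubelet fields), which is why minikube reconfigures instead of reusing the old file. The decision itself is just diff's exit status; a sketch of the same check, with the branch made explicit:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null \
	  || echo "config drift detected: reconfigure from kubeadm.yaml.new"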
	I0731 12:27:13.875235    8642 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:27:13.875284    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:27:13.886862    8642 docker.go:483] Stopping containers: [844d08449ca3 2c107ecdfbc5 8d7ed7e39f35 de06d8fe8794 17f9db5ab999 acc1c033fa0f f47d526e0a42 e15e07a5136c]
	I0731 12:27:13.886929    8642 ssh_runner.go:195] Run: docker stop 844d08449ca3 2c107ecdfbc5 8d7ed7e39f35 de06d8fe8794 17f9db5ab999 acc1c033fa0f f47d526e0a42 e15e07a5136c
	I0731 12:27:13.898510    8642 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:27:13.904366    8642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:27:13.907765    8642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:27:13.907774    8642 kubeadm.go:157] found existing configuration files:
	
	I0731 12:27:13.907812    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf
	I0731 12:27:13.910589    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:27:13.910635    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:27:13.913551    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf
	I0731 12:27:13.916592    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:27:13.916648    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:27:13.920365    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf
	I0731 12:27:13.924000    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:27:13.924053    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:27:13.927343    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf
	I0731 12:27:13.930514    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:27:13.930567    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
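The four grep/rm pairs above implement one stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane URL is removed so kubeadm can regenerate it. Collapsed into a single loop (same commands as the log; the loop form is illustrative):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:51260" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done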
	I0731 12:27:13.933469    8642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:27:13.936877    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:13.962929    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.306434    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.434777    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.455842    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:27:14.478516    8642 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:27:14.478589    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:14.980833    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:15.480638    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:27:15.484663    8642 api_server.go:72] duration metric: took 1.006182875s to wait for apiserver process to appear ...
	I0731 12:27:15.484672    8642 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:27:15.484682    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:20.486745    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:20.486829    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:25.487587    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:25.487606    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:30.487948    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:30.487969    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:35.489005    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:35.489053    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:40.490129    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:40.490197    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:45.491725    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:45.491790    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:50.493525    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:50.493587    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:27:55.495989    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:27:55.496039    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:00.497787    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:00.497827    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:05.499935    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:05.499974    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:10.500703    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:10.500788    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:15.503192    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
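Each probe above is an HTTPS GET against https://10.0.2.15:8443/healthz with a roughly 5-second client timeout, repeated until the apiserver answers or the surrounding wait gives up (here it never answers, so minikube falls back to gathering logs). The same check can be reproduced from a shell; -k skips verification of the self-signed minikube CA:

	timeout 5 curl -ksf https://10.0.2.15:8443/healthz || echo "apiserver not responding"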
	I0731 12:28:15.503323    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:15.515992    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:15.516079    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:15.528064    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:15.528130    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:15.537848    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.537863    8642 logs.go:278] No container was found matching "coredns"
	I0731 12:28:15.537938    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:15.548904    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:15.548973    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:15.558535    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.558547    8642 logs.go:278] No container was found matching "kube-proxy"
	I0731 12:28:15.558607    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:15.569253    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:15.569322    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:15.579547    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.579557    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:15.579611    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:15.589998    8642 logs.go:276] 0 containers: []
	W0731 12:28:15.590010    8642 logs.go:278] No container was found matching "storage-provisioner"
	I0731 12:28:15.590016    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:15.590022    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:28:15.617826    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:15.617835    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:15.635218    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:15.635232    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:15.657951    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:15.657967    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:15.682835    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:15.682848    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:15.694521    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:15.694534    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:15.712755    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:15.712766    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:15.717004    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:15.717014    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:15.826748    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:15.826764    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:15.840921    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:15.840934    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:15.853877    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:15.853890    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:15.867363    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:15.867372    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:15.882555    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:15.882566    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:18.402662    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:23.403138    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:23.403392    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:23.431377    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:23.431482    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:23.448435    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:23.448529    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:23.460880    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:28:23.460953    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:23.471522    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:23.471602    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:23.481534    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:28:23.481609    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:23.492335    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:23.492404    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:23.503286    8642 logs.go:276] 0 containers: []
	W0731 12:28:23.503297    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:23.503357    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:23.514379    8642 logs.go:276] 1 containers: [9c98bd426c62]
	I0731 12:28:23.514395    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:23.514407    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:23.526389    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:23.526400    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:23.543677    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:23.543686    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:23.556109    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:28:23.556124    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:28:23.567721    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:23.567732    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:23.585723    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:23.585734    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:23.606931    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:23.606941    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:23.621255    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:23.621265    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:23.643045    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:28:23.643055    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:28:23.654815    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:23.654824    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:23.680631    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:23.680639    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:28:23.706652    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:23.706746    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:23.707810    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:23.707818    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:23.752611    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:23.752624    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:23.766902    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:23.766913    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:23.771669    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:23.771675    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:23.786966    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:28:23.786979    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:28:23.798530    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:23.798539    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:28:23.798574    8642 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:28:23.798580    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	  Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:23.798587    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	  Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:23.798591    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:23.798594    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
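The kubelet problem flagged above ("no relationship found between node ... and this object") is the Node authorizer at work: it only lets a kubelet read a ConfigMap once a pod scheduled to that node references it, so the error is usually transient during a restart, and here it persists only because the apiserver never becomes healthy. Once /healthz answers, the objects involved could be inspected with the node's own kubectl (binary and kubeconfig paths as used elsewhere in this log; the inspection is a suggestion, not part of the test):

	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node stopped-upgrade-155000
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap kube-proxy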
	I0731 12:28:33.802479    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:38.804929    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:38.805173    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:38.823190    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:38.823280    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:38.846924    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:38.847012    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:38.859662    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:28:38.859742    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:38.870006    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:38.870078    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:38.880237    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:28:38.880311    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:38.890867    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:38.890937    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:38.901126    8642 logs.go:276] 0 containers: []
	W0731 12:28:38.901136    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:38.901198    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:38.911600    8642 logs.go:276] 1 containers: [9c98bd426c62]
	I0731 12:28:38.911616    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:38.911621    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:38.931233    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:28:38.931247    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:28:38.942906    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:38.942920    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:38.963481    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:38.963493    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:38.976998    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:38.977012    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:38.991516    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:38.991525    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:39.019462    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:39.019474    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:39.033545    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:39.033556    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:39.045252    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:39.045262    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:39.080653    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:39.080663    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:39.094309    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:28:39.094318    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:28:39.106350    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:28:39.106361    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:28:39.116956    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:39.116969    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:28:39.142444    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:39.142542    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:39.143645    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:39.143651    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:39.147818    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:39.147826    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:39.165626    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:39.165639    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:39.189684    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:39.189692    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:28:39.189719    8642 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:28:39.189723    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	  Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:39.189728    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	  Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:39.189732    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:39.189735    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:28:49.193521    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:28:54.195650    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:28:54.195942    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:28:54.223716    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:28:54.223846    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:28:54.241359    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:28:54.241456    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:28:54.254897    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:28:54.254984    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:28:54.266693    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:28:54.266776    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:28:54.277491    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:28:54.277570    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:28:54.289971    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:28:54.290049    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:28:54.300463    8642 logs.go:276] 0 containers: []
	W0731 12:28:54.300473    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:28:54.300532    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:28:54.311045    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:28:54.311063    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:28:54.311069    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:28:54.335843    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:28:54.335853    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:28:54.360954    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:54.361047    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:54.362154    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:28:54.362158    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:28:54.373844    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:28:54.373855    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:28:54.395714    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:28:54.395726    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:28:54.410457    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:28:54.410467    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:28:54.452215    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:28:54.452231    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:28:54.465100    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:28:54.465110    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:28:54.476277    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:28:54.476288    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:28:54.487347    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:28:54.487358    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:28:54.499246    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:28:54.499257    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:28:54.503676    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:28:54.503687    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:28:54.527192    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:28:54.527203    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:28:54.540154    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:28:54.540165    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:28:54.557273    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:28:54.557282    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:28:54.576440    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:28:54.576452    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:28:54.590261    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:28:54.590274    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:28:54.604487    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:54.604497    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:28:54.604519    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:28:54.604524    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:28:54.604527    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:28:54.604531    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:28:54.604534    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
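
The block above is one complete diagnostic cycle: api_server.go probes https://10.0.2.15:8443/healthz, the request hits its deadline after about five seconds, and logs.go then enumerates and tails every control-plane container before the next attempt roughly ten seconds later. Below is a minimal Go sketch of that probe-and-retry pattern; the address, per-request timeout, and retry interval are read off the log lines, while the TLS-verification skip and the exact loop shape are assumptions of this sketch, not minikube's actual api_server.go.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver inside the VM serves a self-signed certificate, so this
	// sketch assumes verification must be skipped for the probe to connect.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	url := "https://10.0.2.15:8443/healthz"
	for {
		// ~5s deadline per attempt, matching the gap between the
		// "Checking apiserver healthz" and "stopped:" lines.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			cancel()
			panic(err)
		}
		resp, err := client.Do(req)
		cancel()
		if err != nil {
			// Corresponds to the "context deadline exceeded" failures above.
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			resp.Body.Close()
			fmt.Printf("healthz: %s\n", resp.Status)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		// ~10s between attempts, matching the timestamps in the log.
		time.Sleep(10 * time.Second)
	}
}
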
	I0731 12:29:04.608171    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:09.610247    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:09.610512    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:09.637084    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:09.637217    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:09.654543    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:09.654637    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:09.668057    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:09.668137    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:09.685678    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:09.685758    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:09.696246    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:09.696317    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:09.710141    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:09.710215    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:09.720701    8642 logs.go:276] 0 containers: []
	W0731 12:29:09.720713    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:09.720776    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:09.732152    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:09.732174    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:09.732179    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:09.750107    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:09.750117    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:09.761999    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:09.762009    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:09.773728    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:09.773739    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:09.798936    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:09.799033    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:09.800170    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:09.800178    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:09.836797    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:09.836813    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:09.850197    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:09.850205    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:09.864587    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:09.864602    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:09.876337    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:09.876346    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:09.901251    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:09.901259    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:09.905112    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:09.905120    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:09.922466    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:09.922479    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:09.935295    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:09.935304    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:09.949386    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:09.949397    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:09.963804    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:09.963816    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:09.975644    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:09.975655    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:09.988116    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:09.988130    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:10.010919    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:10.010929    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:10.010955    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:10.010959    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:10.010963    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:10.010967    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:10.010970    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
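
Each cycle starts with the same container-discovery step: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` call per component, whose IDs feed the later log gathering. A hedged sketch of that step follows, run against a local docker CLI rather than minikube's ssh_runner; the filter and format strings come verbatim from the log, while the helper name containerIDs is invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers, running or exited, whose
// name matches the given filter, e.g. "k8s_kube-apiserver".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Prints e.g. "2 containers: [38f139b04810 f0142d5d757f]",
		// the same shape logs.go:276 reports above.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
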
	I0731 12:29:20.014787    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:25.017024    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:25.017283    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:25.036017    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:25.036116    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:25.050356    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:25.050426    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:25.062728    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:25.062804    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:25.073241    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:25.073307    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:25.097518    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:25.097595    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:25.122366    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:25.122428    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:25.136434    8642 logs.go:276] 0 containers: []
	W0731 12:29:25.136446    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:25.136500    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:25.146974    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:25.146993    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:25.146999    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:25.160789    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:25.160798    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:25.172298    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:25.172311    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:25.176237    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:25.176245    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:25.198378    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:25.198393    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:25.216614    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:25.216624    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:25.228343    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:25.228358    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:25.242532    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:25.242543    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:25.266768    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:25.266779    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:25.278492    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:25.278506    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:25.293959    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:25.293973    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:25.332311    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:25.332324    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:25.346749    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:25.346759    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:25.359378    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:25.359388    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:25.372031    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:25.372045    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:25.389334    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:25.389348    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:25.400481    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:25.400493    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:25.425000    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:25.425094    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:25.426242    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:25.426248    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:25.426270    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:25.426273    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:25.426277    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:25.426280    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:25.426283    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:35.430156    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:40.432771    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:40.433298    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:40.470649    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:40.470807    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:40.491705    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:40.491816    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:40.505942    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:40.506024    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:40.519450    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:40.519519    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:40.532868    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:40.532938    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:40.543428    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:40.543495    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:40.553933    8642 logs.go:276] 0 containers: []
	W0731 12:29:40.553948    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:40.554010    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:40.569342    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:40.569363    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:40.569369    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:40.580499    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:40.580509    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:40.592069    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:40.592083    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:40.609876    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:40.609889    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:40.627250    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:40.627260    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:40.653062    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:40.653155    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:40.654297    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:40.654304    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:40.669752    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:40.669765    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:40.684944    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:40.684957    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:40.696352    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:40.696362    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:40.707329    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:40.707339    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:40.732621    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:40.732637    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:40.767561    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:40.767573    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:40.780269    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:40.780282    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:40.802444    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:40.802456    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:40.813793    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:40.813802    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:40.817894    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:40.817900    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:40.831712    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:40.831728    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:40.846676    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:40.846687    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:40.846714    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:40.846723    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:40.846728    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:40.846733    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:40.846735    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:29:50.850576    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:29:55.852786    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:29:55.853001    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:29:55.878213    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:29:55.878335    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:29:55.894348    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:29:55.894433    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:29:55.910076    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:29:55.910158    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:29:55.921840    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:29:55.921909    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:29:55.932357    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:29:55.932422    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:29:55.942949    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:29:55.943023    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:29:55.979453    8642 logs.go:276] 0 containers: []
	W0731 12:29:55.979466    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:29:55.979530    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:29:55.993059    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:29:55.993077    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:29:55.993083    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:29:55.997776    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:29:55.997782    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:29:56.017970    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:29:56.017981    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:29:56.030711    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:29:56.030725    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:29:56.042160    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:29:56.042173    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:29:56.060335    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:29:56.060348    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:29:56.073298    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:29:56.073308    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:29:56.087702    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:29:56.087714    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:29:56.100232    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:29:56.100243    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:29:56.123539    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:29:56.123547    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:29:56.162202    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:29:56.162212    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:29:56.176007    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:29:56.176019    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:29:56.198128    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:29:56.198138    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:29:56.212224    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:29:56.212235    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:29:56.230739    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:29:56.230751    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:29:56.242837    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:29:56.242848    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:29:56.268837    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:56.268934    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:56.270121    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:29:56.270127    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:29:56.281813    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:56.281822    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:29:56.281849    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:29:56.281853    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:29:56.281858    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:29:56.281862    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:29:56.281865    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:30:06.285714    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:11.287777    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:11.287922    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:11.301550    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:11.301636    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:11.316848    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:11.316921    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:11.327076    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:11.327159    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:11.337531    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:11.337603    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:11.349407    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:11.349478    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:11.360020    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:11.360090    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:11.371151    8642 logs.go:276] 0 containers: []
	W0731 12:30:11.371164    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:11.371226    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:11.381815    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:11.381835    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:11.381840    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:11.393504    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:11.393516    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:11.407214    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:11.407225    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:11.425617    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:11.425631    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:11.445728    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:11.445739    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:11.463381    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:11.463391    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:11.485703    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:11.485714    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:11.502839    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:11.502850    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:11.514193    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:11.514203    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:11.540659    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:11.540755    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:11.541935    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:11.541939    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:11.577903    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:11.577916    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:11.592816    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:11.592826    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:11.615570    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:11.615580    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:11.633233    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:11.633245    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:11.637830    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:11.637838    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:11.649585    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:11.649600    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:11.660986    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:11.661000    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:11.686029    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:11.686038    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:11.686065    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:11.686069    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:11.686073    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:11.686077    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:11.686082    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:30:21.689881    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:26.690208    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:26.690440    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:26.732863    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:26.732966    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:26.755303    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:26.755382    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:26.766661    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:26.766736    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:26.778389    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:26.778462    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:26.796284    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:26.796353    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:26.806667    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:26.806737    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:26.817032    8642 logs.go:276] 0 containers: []
	W0731 12:30:26.817045    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:26.817103    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:26.828099    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:26.828119    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:26.828125    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:26.842545    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:26.842559    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:26.856815    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:26.856825    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:26.868219    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:26.868230    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:26.894324    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:26.894415    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:26.895546    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:26.895552    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:26.933339    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:26.933350    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:26.946319    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:26.946331    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:26.960641    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:26.960652    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:26.964911    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:26.964920    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:26.982902    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:26.982915    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:27.002351    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:27.002361    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:27.015950    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:27.015960    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:27.037309    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:27.037324    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:27.049301    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:27.049315    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:27.074390    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:27.074399    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:27.089150    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:27.089161    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:27.100555    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:27.100566    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:27.112582    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:27.112592    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:27.112618    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:27.112638    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:27.112642    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:27.112646    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:27.112649    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
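
The bulk of each cycle is the per-container gathering step: a shell invocation of `docker logs --tail 400 <id>` for every discovered container. A sketch of that step under the same assumption of a local docker CLI; the example IDs 38f139b04810 and 4e2a440edbfb are taken from the log, and gatherLogs is a hypothetical helper, not minikube's logs.go.

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs fetches the last 400 log lines of one container, matching the
// command the log repeats for every component container above.
func gatherLogs(id string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"38f139b04810", "4e2a440edbfb"} { // IDs from the log
		text, err := gatherLogs(id)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Println(text)
	}
}
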
	I0731 12:30:37.116261    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:42.118971    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:42.119348    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:42.152756    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:42.152895    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:42.173292    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:42.173390    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:42.187709    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:42.187790    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:42.199944    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:42.200036    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:42.210072    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:42.210140    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:42.220559    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:42.220627    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:42.231423    8642 logs.go:276] 0 containers: []
	W0731 12:30:42.231433    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:42.231491    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:42.241744    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
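Each diagnostic pass starts by mapping control-plane component names to container IDs with the exact docker ps filter queries shown above. A rough sketch of that discovery step (the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, running or exited, whose name
// matches the kubeadm naming convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}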
	I0731 12:30:42.241763    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:42.241769    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:42.255090    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:42.255101    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:42.268975    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:42.268988    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:42.284608    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:42.284619    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:42.296177    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:42.296187    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:42.321417    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:42.321509    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
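The two "Found kubelet problem" warnings come from scanning the gathered journalctl output for known failure signatures. A hedged sketch of that scan; the marker list below is an assumption, not minikube's actual table in logs.go:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// problemMarkers are substrings that flag a kubelet log line as a
// problem, e.g. the RBAC denial repeated throughout this log.
var problemMarkers = []string{
	"failed to list",
	"Failed to watch",
	"is forbidden",
}

func main() {
	// Feed it the gathered logs: journalctl -u kubelet -n 400 | thisprog
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		for _, m := range problemMarkers {
			if strings.Contains(line, m) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}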
	I0731 12:30:42.322624    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:42.322629    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:42.326555    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:42.326563    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:42.360879    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:42.360891    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:42.382214    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:42.382235    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:42.396497    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:42.396509    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:42.409277    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:42.409290    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:42.424528    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:42.424540    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:42.438978    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:42.438989    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:42.450949    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:42.450960    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:42.468210    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:42.468220    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:42.492075    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:42.492086    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:42.506238    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:42.506251    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
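For every container ID discovered, the harness then tails the last 400 lines of its logs, as in the repeated docker logs --tail 400 commands above. A one-function sketch of that step:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the command in the log: fetch the last 400
// log lines (stdout and stderr combined) for one container.
func tailContainerLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	// 154ad4415e24 is one of the kube-scheduler containers listed above.
	logs, err := tailContainerLogs("154ad4415e24")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(logs)
}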
	I0731 12:30:42.528679    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:42.528693    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:42.528725    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:42.528729    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:42.528734    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:42.528738    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:42.528741    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:30:52.532617    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:30:57.535074    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:30:57.535256    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:30:57.560506    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:30:57.560631    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:30:57.576098    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:30:57.576181    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:30:57.588966    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:30:57.589047    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:30:57.600220    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:30:57.600285    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:30:57.614455    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:30:57.614527    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:30:57.625428    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:30:57.625502    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:30:57.635212    8642 logs.go:276] 0 containers: []
	W0731 12:30:57.635222    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:30:57.635282    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:30:57.645678    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:30:57.645695    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:30:57.645701    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:30:57.650013    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:30:57.650019    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:30:57.664001    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:30:57.664012    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:30:57.675642    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:30:57.675653    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:30:57.697687    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:30:57.697698    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:30:57.714594    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:30:57.714605    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:30:57.739330    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:30:57.739340    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:30:57.781798    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:30:57.781810    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:30:57.796988    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:30:57.796998    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:30:57.808276    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:30:57.808286    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:30:57.820243    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:30:57.820256    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:30:57.838491    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:30:57.838502    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:30:57.850031    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:30:57.850042    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:30:57.875777    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:57.875870    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:57.876978    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:30:57.876982    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:30:57.889590    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:30:57.889601    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:30:57.903076    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:30:57.903090    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:30:57.917191    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:30:57.917206    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:30:57.929499    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:57.929509    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:30:57.929535    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:30:57.929539    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:30:57.929551    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:30:57.929555    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:30:57.929558    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:31:07.931447    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:12.933735    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:12.933940    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:31:12.963261    8642 logs.go:276] 2 containers: [38f139b04810 f0142d5d757f]
	I0731 12:31:12.963398    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:31:12.981759    8642 logs.go:276] 2 containers: [4e2a440edbfb 2c107ecdfbc5]
	I0731 12:31:12.981845    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:31:12.995147    8642 logs.go:276] 1 containers: [c00b4e88bc49]
	I0731 12:31:12.995220    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:31:13.006510    8642 logs.go:276] 2 containers: [154ad4415e24 de06d8fe8794]
	I0731 12:31:13.006584    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:31:13.019399    8642 logs.go:276] 1 containers: [dcabf240dffc]
	I0731 12:31:13.019463    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:31:13.030024    8642 logs.go:276] 2 containers: [fea06384377b 09e791a2079d]
	I0731 12:31:13.030097    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:31:13.039643    8642 logs.go:276] 0 containers: []
	W0731 12:31:13.039654    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:31:13.039710    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:31:13.049933    8642 logs.go:276] 2 containers: [4967399178b6 9c98bd426c62]
	I0731 12:31:13.049959    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:31:13.049964    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:31:13.076790    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:31:13.076886    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:31:13.078010    8642 logs.go:123] Gathering logs for kube-proxy [dcabf240dffc] ...
	I0731 12:31:13.078017    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabf240dffc"
	I0731 12:31:13.095304    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:31:13.095315    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:31:13.107967    8642 logs.go:123] Gathering logs for etcd [4e2a440edbfb] ...
	I0731 12:31:13.107982    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2a440edbfb"
	I0731 12:31:13.121904    8642 logs.go:123] Gathering logs for etcd [2c107ecdfbc5] ...
	I0731 12:31:13.121913    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c107ecdfbc5"
	I0731 12:31:13.142825    8642 logs.go:123] Gathering logs for kube-scheduler [154ad4415e24] ...
	I0731 12:31:13.142840    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 154ad4415e24"
	I0731 12:31:13.165369    8642 logs.go:123] Gathering logs for kube-scheduler [de06d8fe8794] ...
	I0731 12:31:13.165380    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de06d8fe8794"
	I0731 12:31:13.180041    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:31:13.180051    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:31:13.204091    8642 logs.go:123] Gathering logs for kube-apiserver [38f139b04810] ...
	I0731 12:31:13.204098    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38f139b04810"
	I0731 12:31:13.224707    8642 logs.go:123] Gathering logs for coredns [c00b4e88bc49] ...
	I0731 12:31:13.224717    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00b4e88bc49"
	I0731 12:31:13.236324    8642 logs.go:123] Gathering logs for kube-controller-manager [fea06384377b] ...
	I0731 12:31:13.236335    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea06384377b"
	I0731 12:31:13.256006    8642 logs.go:123] Gathering logs for storage-provisioner [4967399178b6] ...
	I0731 12:31:13.256020    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4967399178b6"
	I0731 12:31:13.275681    8642 logs.go:123] Gathering logs for storage-provisioner [9c98bd426c62] ...
	I0731 12:31:13.275694    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c98bd426c62"
	I0731 12:31:13.287137    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:31:13.287148    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:31:13.292179    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:31:13.292186    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:31:13.328855    8642 logs.go:123] Gathering logs for kube-apiserver [f0142d5d757f] ...
	I0731 12:31:13.328869    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0142d5d757f"
	I0731 12:31:13.342467    8642 logs.go:123] Gathering logs for kube-controller-manager [09e791a2079d] ...
	I0731 12:31:13.342481    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e791a2079d"
	I0731 12:31:13.361686    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:31:13.361696    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:31:13.361722    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:31:13.361726    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:31:13.361730    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:31:13.361733    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:31:13.361737    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:31:23.365558    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:28.367807    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:28.367850    8642 kubeadm.go:597] duration metric: took 4m14.509327709s to restartPrimaryControlPlane
	W0731 12:31:28.367901    8642 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:31:28.367919    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:31:29.308771    8642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:31:29.313645    8642 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:31:29.316480    8642 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:31:29.319251    8642 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:31:29.319257    8642 kubeadm.go:157] found existing configuration files:
	
	I0731 12:31:29.319279    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf
	I0731 12:31:29.321858    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:31:29.321885    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:31:29.324558    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf
	I0731 12:31:29.327827    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:31:29.327850    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:31:29.330693    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf
	I0731 12:31:29.333142    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:31:29.333167    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:31:29.336141    8642 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf
	I0731 12:31:29.339236    8642 kubeadm.go:163] "https://control-plane.minikube.internal:51260" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51260 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:31:29.339256    8642 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
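The grep/rm cycle above is kubeadm.go's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so kubeadm init can regenerate it. A minimal sketch of that loop, with the paths and endpoint taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51260"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is
		// missing, which is exactly the "may not be in ... will remove"
		// case in the log above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}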
	I0731 12:31:29.342038    8642 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:31:29.359965    8642 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:31:29.359996    8642 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:31:29.418543    8642 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:31:29.418627    8642 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:31:29.418673    8642 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:31:29.467736    8642 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:31:29.471779    8642 out.go:204]   - Generating certificates and keys ...
	I0731 12:31:29.471815    8642 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:31:29.471845    8642 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:31:29.471880    8642 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:31:29.471925    8642 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:31:29.471963    8642 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:31:29.472008    8642 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:31:29.473548    8642 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:31:29.473581    8642 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:31:29.473619    8642 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:31:29.473665    8642 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:31:29.473685    8642 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:31:29.473715    8642 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:31:29.616962    8642 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:31:29.801903    8642 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:31:29.914312    8642 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:31:29.981148    8642 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:31:30.009503    8642 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:31:30.009955    8642 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:31:30.009978    8642 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:31:30.093823    8642 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:31:30.095238    8642 out.go:204]   - Booting up control plane ...
	I0731 12:31:30.095286    8642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:31:30.095344    8642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:31:30.095411    8642 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:31:30.095466    8642 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:31:30.095675    8642 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:31:34.595843    8642 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501250 seconds
	I0731 12:31:34.595913    8642 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:31:34.604571    8642 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:31:35.112987    8642 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:31:35.113168    8642 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-155000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:31:35.618896    8642 kubeadm.go:310] [bootstrap-token] Using token: ugsrn3.bj4t8nxbtxr4fh6p
	I0731 12:31:35.624960    8642 out.go:204]   - Configuring RBAC rules ...
	I0731 12:31:35.625026    8642 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:31:35.625075    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:31:35.633326    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:31:35.634241    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:31:35.635472    8642 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:31:35.637275    8642 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:31:35.640826    8642 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:31:35.828797    8642 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:31:36.024937    8642 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:31:36.025470    8642 kubeadm.go:310] 
	I0731 12:31:36.025591    8642 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:31:36.025599    8642 kubeadm.go:310] 
	I0731 12:31:36.025638    8642 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:31:36.025645    8642 kubeadm.go:310] 
	I0731 12:31:36.025657    8642 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:31:36.025692    8642 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:31:36.025719    8642 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:31:36.025722    8642 kubeadm.go:310] 
	I0731 12:31:36.025747    8642 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:31:36.025753    8642 kubeadm.go:310] 
	I0731 12:31:36.025776    8642 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:31:36.025779    8642 kubeadm.go:310] 
	I0731 12:31:36.025802    8642 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:31:36.025845    8642 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:31:36.025889    8642 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:31:36.025892    8642 kubeadm.go:310] 
	I0731 12:31:36.025928    8642 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:31:36.025986    8642 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:31:36.025993    8642 kubeadm.go:310] 
	I0731 12:31:36.026038    8642 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ugsrn3.bj4t8nxbtxr4fh6p \
	I0731 12:31:36.026134    8642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 \
	I0731 12:31:36.026146    8642 kubeadm.go:310] 	--control-plane 
	I0731 12:31:36.026149    8642 kubeadm.go:310] 
	I0731 12:31:36.026201    8642 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:31:36.026206    8642 kubeadm.go:310] 
	I0731 12:31:36.026272    8642 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ugsrn3.bj4t8nxbtxr4fh6p \
	I0731 12:31:36.026321    8642 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:12522336a0e1241307a19e7db623c316cbfcdae7801755c9cc730f6bb46269a5 
	I0731 12:31:36.026411    8642 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:31:36.026508    8642 cni.go:84] Creating CNI manager for ""
	I0731 12:31:36.026519    8642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:31:36.030882    8642 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:31:36.039967    8642 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:31:36.043420    8642 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
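The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For illustration only, here is a representative bridge conflist of the kind the "Configuring bridge CNI" step writes; the exact fields and subnet minikube generates may differ:

package main

// bridgeConflist is a plausible bridge CNI chain: a bridge plugin with
// host-local IPAM, plus portmap for hostPort support. Values are guesses.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() { println(bridgeConflist) }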
	I0731 12:31:36.050256    8642 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:31:36.050316    8642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:31:36.050334    8642 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-155000 minikube.k8s.io/updated_at=2024_07_31T12_31_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825 minikube.k8s.io/name=stopped-upgrade-155000 minikube.k8s.io/primary=true
	I0731 12:31:36.099504    8642 kubeadm.go:1113] duration metric: took 49.24275ms to wait for elevateKubeSystemPrivileges
	I0731 12:31:36.099562    8642 ops.go:34] apiserver oom_adj: -16
	I0731 12:31:36.099575    8642 kubeadm.go:394] duration metric: took 4m22.257499708s to StartCluster
	I0731 12:31:36.099586    8642 settings.go:142] acquiring lock: {Name:mkc78613f561547480fb9245c5115e473f0ba2a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:36.099735    8642 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:31:36.100113    8642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/kubeconfig: {Name:mk9ca083b2af3b573139e6661162870891ccebe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:31:36.100327    8642 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:31:36.100378    8642 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:31:36.100420    8642 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-155000"
	I0731 12:31:36.100430    8642 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-155000"
	W0731 12:31:36.100435    8642 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:31:36.100445    8642 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-155000"
	I0731 12:31:36.100448    8642 host.go:66] Checking if "stopped-upgrade-155000" exists ...
	I0731 12:31:36.100454    8642 config.go:182] Loaded profile config "stopped-upgrade-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:31:36.100457    8642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-155000"
	I0731 12:31:36.104968    8642 out.go:177] * Verifying Kubernetes components...
	I0731 12:31:36.105756    8642 kapi.go:59] client config for stopped-upgrade-155000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/stopped-upgrade-155000/client.key", CAFile:"/Users/jenkins/minikube-integration/19355-6494/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10227c1b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:31:36.109198    8642 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-155000"
	W0731 12:31:36.109217    8642 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:31:36.109229    8642 host.go:66] Checking if "stopped-upgrade-155000" exists ...
	I0731 12:31:36.109801    8642 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:36.109807    8642 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:31:36.109813    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:31:36.112964    8642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:31:36.116989    8642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:31:36.121010    8642 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:31:36.121021    8642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:31:36.121030    8642 sshutil.go:53] new ssh client: &{IP:localhost Port:51225 SSHKeyPath:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/stopped-upgrade-155000/id_rsa Username:docker}
	I0731 12:31:36.207185    8642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:31:36.213969    8642 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:31:36.214036    8642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:31:36.218743    8642 api_server.go:72] duration metric: took 118.404542ms to wait for apiserver process to appear ...
	I0731 12:31:36.218754    8642 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:31:36.218764    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:36.251044    8642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:31:36.266613    8642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
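Both addon manifests are applied the same way: kubectl apply -f against the in-VM kubeconfig, as in the two commands above. A local-execution sketch of the command itself (the real harness runs it over SSH via ssh_runner.go, and the helper name here is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon runs the kubectl binary staged inside the VM; sudo accepts
// the leading VAR=value argument as an environment assignment.
func applyAddon(manifest string) error {
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}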
	I0731 12:31:41.220738    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:41.220770    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:46.220846    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:46.220869    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:51.221025    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:51.221044    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:31:56.221410    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:31:56.221447    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:01.221833    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:01.221874    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:06.222478    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:06.222515    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:32:06.628483    8642 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:32:06.632827    8642 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:32:06.639593    8642 addons.go:510] duration metric: took 30.5402645s for enable addons: enabled=[storage-provisioner]
	I0731 12:32:11.223650    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:11.223688    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:16.224824    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:16.224863    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:21.226315    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:21.226402    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:26.227829    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:26.227869    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:31.229915    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:31.229937    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:36.231931    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:32:36.232041    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:36.242668    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:32:36.242748    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:36.252715    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:32:36.252787    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:36.263261    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:32:36.263325    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:36.274074    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:32:36.274149    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:36.284378    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:32:36.284453    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:36.294907    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:32:36.294970    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:36.305032    8642 logs.go:276] 0 containers: []
	W0731 12:32:36.305043    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:36.305096    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:36.315645    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:32:36.315665    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:32:36.315671    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:32:36.337651    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:32:36.337663    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:32:36.348935    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:32:36.348946    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:32:36.364128    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:32:36.364139    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:32:36.381795    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:32:36.381805    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:32:36.395868    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:36.395879    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:36.400305    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:36.400315    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:36.440454    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:32:36.440468    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:32:36.453157    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:32:36.453171    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:32:36.464985    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:32:36.464998    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:32:36.480873    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:36.480886    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:36.505936    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:32:36.505944    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:36.517026    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:36.517037    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:32:36.532390    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:36.532486    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:36.549947    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:36.549955    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:32:36.549983    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:32:36.549988    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:36.549993    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	  Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:36.549996    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:36.549999    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
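
The diagnostic pass above follows the same fixed recipe each time: list the k8s_* containers with docker ps, tail each one with docker logs --tail 400, and pull the kubelet and Docker units from journald. A minimal Go sketch of that loop (an illustration only, not minikube's actual logs.go; the container IDs are simply the ones this run captured and will differ in any other run):

    package main

    // Sketch of the log-gathering pass seen above (hypothetical, not
    // minikube's logs.go): tail each control-plane container, then read
    // the relevant systemd units.
    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) {
        // minikube issues these through ssh_runner as: /bin/bash -c "<cmd>"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }

    func main() {
        // Container IDs as captured in this particular run.
        containers := map[string]string{
            "kube-apiserver":          "514180a8ec6c",
            "etcd":                    "8bab61ccd5f4",
            "kube-scheduler":          "11fafc9bce44",
            "kube-proxy":              "dcc9547f5388",
            "kube-controller-manager": "e19b7b442a6c",
            "storage-provisioner":     "88c293218170",
        }
        for name, id := range containers {
            fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
            run(fmt.Sprintf("docker logs --tail 400 %s", id))
        }
        // kubelet and Docker live in journald, not in containers.
        run("sudo journalctl -u kubelet -n 400")
        run("sudo journalctl -u docker -u cri-docker -n 400")
    }
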
	I0731 12:32:46.553128    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:32:51.555230    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
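
Each retry here GETs https://10.0.2.15:8443/healthz and gives up after roughly five seconds, as the gap between the "Checking" and "stopped" timestamps shows, then waits about ten seconds before the next attempt. A minimal Go sketch of such a probe (an illustration, not minikube's api_server.go; skipping TLS verification is an assumption made because the guest apiserver presents a self-signed certificate):

    package main

    // Sketch of the healthz probe above (hypothetical): GET /healthz with a
    // short client timeout, retrying until the apiserver answers 200 OK.
    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func healthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s "Checking" -> "stopped" gap
            Transport: &http.Transport{
                // Assumption: self-signed apiserver cert inside the guest.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. context deadline exceeded / i/o timeout, as above
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz: %s", resp.Status)
        }
        return nil
    }

    func main() {
        for {
            if err := healthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(10 * time.Second) // the log retries on a ~10s cadence
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }
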
	I0731 12:32:51.555353    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:32:51.567545    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:32:51.567657    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:32:51.578453    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:32:51.578524    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:32:51.589022    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:32:51.589094    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:32:51.599739    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:32:51.599815    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:32:51.610061    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:32:51.610134    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:32:51.620783    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:32:51.620853    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:32:51.630950    8642 logs.go:276] 0 containers: []
	W0731 12:32:51.630960    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:32:51.631019    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:32:51.641244    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:32:51.641259    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:32:51.641264    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:32:51.658326    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:51.658421    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:51.676149    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:32:51.676160    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:32:51.711163    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:32:51.711174    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:32:51.725409    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:32:51.725422    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:32:51.739993    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:32:51.740006    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:32:51.756657    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:32:51.756670    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:32:51.768030    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:32:51.768045    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:32:51.772459    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:32:51.772467    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:32:51.788751    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:32:51.788761    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:32:51.800074    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:32:51.800088    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:32:51.814405    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:32:51.814415    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:32:51.830120    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:32:51.830131    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:32:51.841480    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:32:51.841491    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:32:51.866809    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:51.866817    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:32:51.866842    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:32:51.866849    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:32:51.866854    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:32:51.866859    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:32:51.866862    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:33:01.870673    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:06.872802    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:06.872879    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:06.883643    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:06.883720    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:06.893849    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:06.893915    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:06.904739    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:33:06.904867    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:06.915962    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:06.916033    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:06.929941    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:06.930008    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:06.940762    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:06.940837    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:06.951030    8642 logs.go:276] 0 containers: []
	W0731 12:33:06.951039    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:06.951092    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:06.961898    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:06.961913    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:06.961919    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:06.979017    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:06.979030    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:06.990813    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:06.990824    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:07.002707    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:07.002721    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:07.017604    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:07.017613    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:07.053881    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:07.053893    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:07.069059    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:07.069073    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:07.084041    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:07.084057    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:07.098542    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:07.098558    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:07.112897    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:07.112910    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:07.124758    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:07.124770    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:07.140356    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:07.140453    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:07.157677    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:07.157684    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:07.162238    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:07.162249    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:07.186719    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:07.186729    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:07.186756    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:07.186761    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:07.186782    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:07.186787    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:07.186790    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:33:17.189742    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:22.191729    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:22.191844    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:22.204413    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:22.204484    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:22.216494    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:22.216572    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:22.227953    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:33:22.228032    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:22.239274    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:22.239349    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:22.250778    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:22.250847    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:22.261795    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:22.261867    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:22.272388    8642 logs.go:276] 0 containers: []
	W0731 12:33:22.272399    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:22.272464    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:22.283662    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:22.283678    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:22.283683    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:22.295110    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:22.295120    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:22.320911    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:22.320923    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:22.333890    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:22.333901    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:22.338584    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:22.338595    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:22.376979    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:22.376991    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:22.394262    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:22.394275    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:22.409434    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:22.409451    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:22.428726    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:22.428743    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:22.441688    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:22.441701    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:22.457648    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:22.457749    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:22.475735    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:22.475751    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:22.488376    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:22.488387    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:22.515331    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:22.515342    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:22.527055    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:22.527067    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:22.527094    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:22.527099    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:22.527105    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:22.527111    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:22.527114    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:33:32.530910    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:37.531241    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:37.531454    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:37.549529    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:37.549620    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:37.562535    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:37.562617    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:37.573405    8642 logs.go:276] 2 containers: [39ab3fb165fd 60a1ac98657e]
	I0731 12:33:37.573475    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:37.584033    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:37.584105    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:37.597969    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:37.598050    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:37.608145    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:37.608210    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:37.618043    8642 logs.go:276] 0 containers: []
	W0731 12:33:37.618052    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:37.618108    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:37.629660    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:37.629674    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:37.629679    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:37.641779    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:37.641791    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:37.657191    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:37.657203    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:37.669067    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:37.669079    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:37.686099    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:37.686108    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:37.699336    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:37.699429    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:37.716921    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:37.716931    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:37.721312    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:37.721320    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:37.755033    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:37.755043    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:37.769361    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:37.769370    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:37.780956    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:37.780966    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:37.801043    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:37.801052    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:37.812961    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:37.812973    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:37.837149    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:37.837162    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:37.853716    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:37.853727    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:37.853754    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:37.853760    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:37.853764    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:37.853783    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:37.853793    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:33:47.856584    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:33:52.858757    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:33:52.858882    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:33:52.870357    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:33:52.870430    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:33:52.881743    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:33:52.881809    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:33:52.892911    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:33:52.892995    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:33:52.904119    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:33:52.904198    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:33:52.915194    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:33:52.915270    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:33:52.930430    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:33:52.930505    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:33:52.943557    8642 logs.go:276] 0 containers: []
	W0731 12:33:52.943568    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:33:52.943630    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:33:52.953735    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:33:52.953752    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:33:52.953758    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:33:52.988803    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:33:52.988814    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:33:53.005383    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:33:53.005393    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:33:53.025789    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:33:53.025800    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:33:53.037604    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:33:53.037615    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:33:53.049536    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:33:53.049551    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:33:53.061855    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:33:53.061866    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:33:53.086746    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:33:53.086753    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:33:53.101330    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:53.101423    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:53.118458    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:33:53.118463    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:33:53.132324    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:33:53.132337    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:33:53.143608    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:33:53.143622    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:33:53.161246    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:33:53.161256    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:33:53.165585    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:33:53.165591    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:33:53.179502    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:33:53.179514    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:33:53.191029    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:33:53.191041    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:33:53.202713    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:53.202724    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:33:53.202751    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:33:53.202755    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:33:53.202759    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:33:53.202764    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:33:53.202766    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:34:03.206636    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:08.208827    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:08.208996    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:08.221773    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:08.221857    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:08.234355    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:08.234432    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:08.244508    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:08.244579    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:08.254664    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:08.254742    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:08.264769    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:08.264831    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:08.275654    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:08.275735    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:08.286105    8642 logs.go:276] 0 containers: []
	W0731 12:34:08.286117    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:08.286178    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:08.297207    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:08.297226    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:08.297231    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:08.309070    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:08.309080    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:08.325367    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:08.325384    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:08.336964    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:08.336977    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:08.351122    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:08.351137    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:08.362674    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:08.362688    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:08.377574    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:08.377672    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:08.394764    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:08.394769    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:08.429458    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:08.429472    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:08.447017    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:08.447035    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:08.458528    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:08.458539    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:08.483201    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:08.483215    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:08.487392    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:08.487399    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:08.501129    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:08.501140    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:08.514397    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:08.514408    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:08.526248    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:08.526259    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:08.546113    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:08.546122    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:08.546149    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:08.546154    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:08.546158    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:08.546167    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:08.546170    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:34:18.549942    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:23.552034    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:23.552231    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:23.568759    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:23.568848    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:23.586014    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:23.586073    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:23.604123    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:23.604191    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:23.615341    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:23.615408    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:23.626579    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:23.626644    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:23.643004    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:23.643076    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:23.653798    8642 logs.go:276] 0 containers: []
	W0731 12:34:23.653809    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:23.653866    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:23.664326    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:23.664343    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:23.664349    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:23.701393    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:23.701406    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:23.713339    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:23.713351    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:23.725216    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:23.725227    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:23.729547    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:23.729553    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:23.741490    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:23.741501    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:23.753637    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:23.753647    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:23.778815    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:23.778823    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:23.791010    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:23.791023    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:23.802467    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:23.802477    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:23.817197    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:23.817208    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:23.833233    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:23.833245    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:23.846659    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:23.846753    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:23.864261    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:23.864270    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:23.876663    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:23.876673    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:23.894783    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:23.894792    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:23.910349    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:23.910358    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:23.910383    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:23.910387    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:23.910390    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:23.910394    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:23.910399    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:34:33.914168    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:38.916297    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:38.916513    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:38.951503    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:38.951589    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:38.966064    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:38.966148    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:38.978197    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:38.978272    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:38.989029    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:38.989093    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:39.001817    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:39.001886    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:39.012472    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:39.012534    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:39.023049    8642 logs.go:276] 0 containers: []
	W0731 12:34:39.023060    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:39.023118    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:39.033335    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:39.033352    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:39.033357    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:39.037749    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:39.037757    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:39.049407    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:39.049416    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:39.061643    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:39.061654    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:39.080548    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:39.080558    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:39.092724    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:39.092736    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:39.108009    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:39.108103    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:39.125662    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:39.125671    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:39.140146    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:39.140156    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:39.152681    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:39.152692    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:39.187337    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:39.187349    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:39.203339    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:39.203348    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:39.220841    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:39.220852    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:39.234272    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:39.234281    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:39.245929    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:39.245939    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:39.259416    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:39.259426    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:39.283258    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:39.283266    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:39.283288    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:39.283292    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:39.283296    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:39.283300    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:39.283311    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:34:49.285464    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:34:54.287527    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:34:54.287614    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:34:54.299205    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:34:54.299286    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:34:54.310364    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:34:54.310439    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:34:54.320907    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:34:54.320976    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:34:54.331530    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:34:54.331601    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:34:54.342392    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:34:54.342459    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:34:54.353448    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:34:54.353519    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:34:54.364246    8642 logs.go:276] 0 containers: []
	W0731 12:34:54.364258    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:34:54.364314    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:34:54.374517    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:34:54.374534    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:34:54.374539    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:34:54.411176    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:34:54.411187    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:34:54.425701    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:34:54.425710    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:34:54.442035    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:34:54.442046    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:34:54.453754    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:34:54.453764    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:34:54.465068    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:34:54.465077    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:34:54.478351    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:54.478447    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:54.495868    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:34:54.495878    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:34:54.511101    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:34:54.511115    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:34:54.529022    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:34:54.529034    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:34:54.533816    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:34:54.533825    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:34:54.546071    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:34:54.546086    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:34:54.570591    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:34:54.570603    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:34:54.584366    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:34:54.584376    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:34:54.600267    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:34:54.600277    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:34:54.612349    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:34:54.612360    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:34:54.624070    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:54.624081    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:34:54.624106    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:34:54.624112    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:34:54.624116    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:34:54.624120    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:34:54.624124    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:35:04.627906    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:09.629922    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:09.630080    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:09.642563    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:35:09.642641    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:09.658523    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:35:09.658599    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:09.668884    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:35:09.668963    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:09.685617    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:35:09.685686    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:09.695905    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:35:09.695976    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:09.710036    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:35:09.710105    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:09.720345    8642 logs.go:276] 0 containers: []
	W0731 12:35:09.720355    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:09.720407    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:09.731118    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:35:09.731137    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:09.731141    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:09.756719    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:09.756728    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:09.760996    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:35:09.761005    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:35:09.775187    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:35:09.775199    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:35:09.790804    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:35:09.790815    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:35:09.807715    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:09.807728    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:35:09.822669    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:09.822764    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:09.840531    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:35:09.840542    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:35:09.852850    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:09.852860    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:09.888957    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:35:09.888968    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:35:09.907048    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:35:09.907060    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:35:09.922592    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:35:09.922607    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:35:09.934202    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:35:09.934212    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:35:09.948681    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:35:09.948692    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:35:09.968773    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:35:09.968783    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:35:09.979803    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:35:09.979813    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:09.991771    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:09.991780    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:35:09.991806    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:35:09.991812    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:09.991816    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:09.991819    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:09.991822    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:35:19.995592    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:24.997623    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:24.997805    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:35:25.011800    8642 logs.go:276] 1 containers: [514180a8ec6c]
	I0731 12:35:25.011887    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:35:25.023207    8642 logs.go:276] 1 containers: [8bab61ccd5f4]
	I0731 12:35:25.023284    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:35:25.035598    8642 logs.go:276] 4 containers: [0609bb6196bf f441fb9400e5 39ab3fb165fd 60a1ac98657e]
	I0731 12:35:25.035669    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:35:25.046374    8642 logs.go:276] 1 containers: [11fafc9bce44]
	I0731 12:35:25.046449    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:35:25.056932    8642 logs.go:276] 1 containers: [dcc9547f5388]
	I0731 12:35:25.057004    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:35:25.067319    8642 logs.go:276] 1 containers: [e19b7b442a6c]
	I0731 12:35:25.067384    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:35:25.077991    8642 logs.go:276] 0 containers: []
	W0731 12:35:25.078001    8642 logs.go:278] No container was found matching "kindnet"
	I0731 12:35:25.078063    8642 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:35:25.088276    8642 logs.go:276] 1 containers: [88c293218170]
	I0731 12:35:25.088294    8642 logs.go:123] Gathering logs for kubelet ...
	I0731 12:35:25.088300    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:35:25.103362    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:25.103458    8642 logs.go:138] Found kubelet problem: Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:25.120995    8642 logs.go:123] Gathering logs for dmesg ...
	I0731 12:35:25.121002    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:35:25.125635    8642 logs.go:123] Gathering logs for coredns [39ab3fb165fd] ...
	I0731 12:35:25.125646    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 39ab3fb165fd"
	I0731 12:35:25.157618    8642 logs.go:123] Gathering logs for coredns [0609bb6196bf] ...
	I0731 12:35:25.157633    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0609bb6196bf"
	I0731 12:35:25.180271    8642 logs.go:123] Gathering logs for coredns [60a1ac98657e] ...
	I0731 12:35:25.180282    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60a1ac98657e"
	I0731 12:35:25.193036    8642 logs.go:123] Gathering logs for kube-proxy [dcc9547f5388] ...
	I0731 12:35:25.193047    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc9547f5388"
	I0731 12:35:25.204732    8642 logs.go:123] Gathering logs for kube-apiserver [514180a8ec6c] ...
	I0731 12:35:25.204744    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 514180a8ec6c"
	I0731 12:35:25.219103    8642 logs.go:123] Gathering logs for kube-controller-manager [e19b7b442a6c] ...
	I0731 12:35:25.219114    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e19b7b442a6c"
	I0731 12:35:25.236783    8642 logs.go:123] Gathering logs for storage-provisioner [88c293218170] ...
	I0731 12:35:25.236792    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88c293218170"
	I0731 12:35:25.248355    8642 logs.go:123] Gathering logs for Docker ...
	I0731 12:35:25.248366    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:35:25.272725    8642 logs.go:123] Gathering logs for container status ...
	I0731 12:35:25.272737    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:35:25.284239    8642 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:35:25.284251    8642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:35:25.320106    8642 logs.go:123] Gathering logs for etcd [8bab61ccd5f4] ...
	I0731 12:35:25.320117    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bab61ccd5f4"
	I0731 12:35:25.334332    8642 logs.go:123] Gathering logs for coredns [f441fb9400e5] ...
	I0731 12:35:25.334343    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f441fb9400e5"
	I0731 12:35:25.348454    8642 logs.go:123] Gathering logs for kube-scheduler [11fafc9bce44] ...
	I0731 12:35:25.348465    8642 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11fafc9bce44"
	I0731 12:35:25.363862    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:25.363877    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:35:25.363907    8642 out.go:239] X Problems detected in kubelet:
	W0731 12:35:25.363935    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: W0731 19:28:17.375212    1653 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	W0731 12:35:25.363942    8642 out.go:239]   Jul 31 19:28:17 stopped-upgrade-155000 kubelet[1653]: E0731 19:28:17.375256    1653 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-155000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-155000' and this object
	I0731 12:35:25.363947    8642 out.go:304] Setting ErrFile to fd 2...
	I0731 12:35:25.363950    8642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:35:35.367030    8642 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:35:40.369191    8642 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:35:40.373004    8642 out.go:177] 
	W0731 12:35:40.376053    8642 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:35:40.376063    8642 out.go:239] * 
	W0731 12:35:40.376887    8642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:35:40.389055    8642 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-155000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (599.67s)
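The Upgrade failure above is a timeout rather than a crash: for six minutes minikube alternates between polling the apiserver's /healthz endpoint (which never answers) and re-gathering the same component logs, and the only kubelet problem it flags is the kube-proxy ConfigMap RBAC denial. A minimal bash sketch of those two checks run by hand, with the IP, port, and profile name taken from the log above (adjust for another cluster):

	# Poll the health endpoint the same way the api_server.go loop does.
	# -k skips TLS verification, since the cluster CA is not in the host trust store.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo

	# Re-run the kubelet journal scan that produced the "Found kubelet problem" lines.
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-155000 -- \
	  sudo journalctl -u kubelet -n 400 | grep -E 'reflector|forbidden'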

                                                
                                    
TestPause/serial/Start (9.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-603000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-603000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.846636916s)

                                                
                                                
-- stdout --
	* [pause-603000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-603000" primary control-plane node in "pause-603000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-603000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-603000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-603000 -n pause-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-603000 -n pause-603000: exit status 7 (65.143333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.91s)
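From here on, every qemu2 start in this run fails the same way: the driver cannot dial the UNIX socket at /var/run/socket_vmnet, so both the initial create and the retry die with "Connection refused" before Kubernetes is ever involved. A hedged triage sketch, assuming socket_vmnet was installed through Homebrew as in the minikube qemu2 setup docs (service management and paths may differ on other installs):

	# Is anything serving the socket the qemu2 driver dials?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet || echo "no socket_vmnet listener"

	# With a Homebrew install the helper runs as a root service;
	# restarting it typically clears the "Connection refused" loop.
	sudo brew services restart socket_vmnet

	# Then recreate the failed profile (name taken from the test above).
	out/minikube-darwin-arm64 delete -p pause-603000
	out/minikube-darwin-arm64 start -p pause-603000 --memory=2048 --driver=qemu2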

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (11.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 : exit status 80 (11.161236375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-911000" primary control-plane node in "NoKubernetes-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (53.459208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (11.22s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 : exit status 80 (7.394116334s)

                                                
                                                
-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-911000
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (49.842167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.44s)

                                                
                                    
TestNoKubernetes/serial/Start (7.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 : exit status 80 (7.424262959s)

                                                
                                                
-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-911000
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (30.874667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.46s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.94s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19355
- KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1339757960/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.94s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.63s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19355
- KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2916880957/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.63s)
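Both TestHyperkitDriverSkipUpgrade cases exit with status 56 (DRV_UNSUPPORTED_OS) rather than a provisioning error: hyperkit ships x86_64 binaries only, and this agent is Apple silicon, so the test's precondition can never hold on this host. A quick, purely illustrative confirmation of the mismatch:

	# The agent is arm64; hyperkit has no darwin/arm64 build.
	uname -sm    # -> Darwin arm64 on the M1 agent above
	# On Apple silicon, a supported driver such as qemu2 (used by the
	# other tests in this run) has to be selected instead of hyperkit.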

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 : exit status 80 (5.328169375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-911000
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-911000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-911000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-911000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-911000 -n NoKubernetes-911000: exit status 7 (67.668375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-911000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.40s)
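
Note: this failure, and every qemu2 Start failure that follows, reduces to the same root cause: nothing is listening on the /var/run/socket_vmnet unix socket, so socket_vmnet_client gets "Connection refused" and the VM never boots. A minimal preflight check (a sketch, assuming only the socket path seen in the logs) makes the condition easy to verify before running the suite:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the socket_vmnet daemon's unix socket; "connection refused" here
// reproduces exactly what the qemu2 driver reports in the failures above.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}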

TestNetworkPlugins/group/auto/Start (9.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.866392875s)

-- stdout --
	* [auto-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-452000" primary control-plane node in "auto-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:23.449330    9417 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:23.449454    9417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:23.449457    9417 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:23.449460    9417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:23.449611    9417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:37:23.450687    9417 out.go:298] Setting JSON to false
	I0731 12:37:23.466637    9417 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5806,"bootTime":1722448837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:37:23.466704    9417 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:23.473500    9417 out.go:177] * [auto-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:23.481489    9417 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:37:23.481546    9417 notify.go:220] Checking for updates...
	I0731 12:37:23.488496    9417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:37:23.491490    9417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:23.494473    9417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:23.497493    9417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:37:23.500499    9417 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:23.503836    9417 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:23.503904    9417 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:23.503960    9417 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:23.508427    9417 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:23.515416    9417 start.go:297] selected driver: qemu2
	I0731 12:37:23.515421    9417 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:23.515427    9417 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:23.517492    9417 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:23.520478    9417 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:23.522006    9417 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:37:23.522022    9417 cni.go:84] Creating CNI manager for ""
	I0731 12:37:23.522029    9417 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:37:23.522032    9417 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:37:23.522055    9417 start.go:340] cluster config:
	{Name:auto-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:23.525537    9417 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:23.532483    9417 out.go:177] * Starting "auto-452000" primary control-plane node in "auto-452000" cluster
	I0731 12:37:23.536445    9417 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:23.536461    9417 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:23.536476    9417 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:23.536553    9417 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:23.536565    9417 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:23.536617    9417 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/auto-452000/config.json ...
	I0731 12:37:23.536629    9417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/auto-452000/config.json: {Name:mkb8a5d3e6c9efb43d71efd27becbbbaf7ce83c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:23.536844    9417 start.go:360] acquireMachinesLock for auto-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:23.536877    9417 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "auto-452000"
	I0731 12:37:23.536888    9417 start.go:93] Provisioning new machine with config: &{Name:auto-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:23.536926    9417 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:23.544387    9417 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:37:23.561546    9417 start.go:159] libmachine.API.Create for "auto-452000" (driver="qemu2")
	I0731 12:37:23.561574    9417 client.go:168] LocalClient.Create starting
	I0731 12:37:23.561634    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:23.561662    9417 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:23.561672    9417 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:23.561709    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:23.561731    9417 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:23.561743    9417 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:23.562097    9417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:23.713752    9417 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:23.792476    9417 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:23.792487    9417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:23.792671    9417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2
	I0731 12:37:23.802033    9417 main.go:141] libmachine: STDOUT: 
	I0731 12:37:23.802048    9417 main.go:141] libmachine: STDERR: 
	I0731 12:37:23.802089    9417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2 +20000M
	I0731 12:37:23.809873    9417 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:23.809896    9417 main.go:141] libmachine: STDERR: 
	I0731 12:37:23.809910    9417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2
	I0731 12:37:23.809919    9417 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:23.809928    9417 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:23.809969    9417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:c2:af:2b:c9:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2
	I0731 12:37:23.811563    9417 main.go:141] libmachine: STDOUT: 
	I0731 12:37:23.811577    9417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:23.811597    9417 client.go:171] duration metric: took 250.023875ms to LocalClient.Create
	I0731 12:37:25.813741    9417 start.go:128] duration metric: took 2.276833208s to createHost
	I0731 12:37:25.813808    9417 start.go:83] releasing machines lock for "auto-452000", held for 2.276961125s
	W0731 12:37:25.813863    9417 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:25.828924    9417 out.go:177] * Deleting "auto-452000" in qemu2 ...
	W0731 12:37:25.858150    9417 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:25.858176    9417 start.go:729] Will try again in 5 seconds ...
	I0731 12:37:30.860305    9417 start.go:360] acquireMachinesLock for auto-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:30.860813    9417 start.go:364] duration metric: took 382.542µs to acquireMachinesLock for "auto-452000"
	I0731 12:37:30.860944    9417 start.go:93] Provisioning new machine with config: &{Name:auto-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:30.861245    9417 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:30.877146    9417 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:37:30.929264    9417 start.go:159] libmachine.API.Create for "auto-452000" (driver="qemu2")
	I0731 12:37:30.929320    9417 client.go:168] LocalClient.Create starting
	I0731 12:37:30.929457    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:30.929521    9417 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:30.929536    9417 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:30.929620    9417 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:30.929666    9417 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:30.929681    9417 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:30.930201    9417 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:31.101121    9417 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:31.220484    9417 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:31.220489    9417 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:31.220666    9417 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2
	I0731 12:37:31.229726    9417 main.go:141] libmachine: STDOUT: 
	I0731 12:37:31.229745    9417 main.go:141] libmachine: STDERR: 
	I0731 12:37:31.229791    9417 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2 +20000M
	I0731 12:37:31.237634    9417 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:31.237650    9417 main.go:141] libmachine: STDERR: 
	I0731 12:37:31.237661    9417 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2
	I0731 12:37:31.237664    9417 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:31.237676    9417 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:31.237702    9417 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:e8:b6:f7:6c:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/auto-452000/disk.qcow2
	I0731 12:37:31.239263    9417 main.go:141] libmachine: STDOUT: 
	I0731 12:37:31.239280    9417 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:31.239292    9417 client.go:171] duration metric: took 309.971958ms to LocalClient.Create
	I0731 12:37:33.241426    9417 start.go:128] duration metric: took 2.380168333s to createHost
	I0731 12:37:33.241617    9417 start.go:83] releasing machines lock for "auto-452000", held for 2.38070575s
	W0731 12:37:33.241954    9417 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:33.256588    9417 out.go:177] 
	W0731 12:37:33.258441    9417 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:33.258465    9417 out.go:239] * 
	* 
	W0731 12:37:33.260882    9417 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:33.274545    9417 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.87s)
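
Note: the qemu command line captured above shows how the guest network is wired: socket_vmnet_client connects to /var/run/socket_vmnet and hands the connected descriptor to qemu-system-aarch64 as fd 3 (-netdev socket,id=net0,fd=3). The sketch below illustrates that fd-passing scheme in Go; it is illustrative only (the real socket_vmnet_client is not Go, and the qemu arguments here are trimmed down from the logged invocation):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

// Illustrative only: connect to the daemon, then pass the connected socket
// to the child as fd 3 via ExtraFiles (ExtraFiles[0] becomes fd 3), matching
// the -netdev socket,id=net0,fd=3 flag in the logged qemu invocation.
func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// This is the exact failure mode recorded throughout this report.
		log.Fatalf("socket_vmnet unreachable: %v", err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	cmd := exec.Command("qemu-system-aarch64",
		"-netdev", "socket,id=net0,fd=3",
		"-device", "virtio-net-pci,netdev=net0")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}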

TestNetworkPlugins/group/flannel/Start (9.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.960516417s)

-- stdout --
	* [flannel-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-452000" primary control-plane node in "flannel-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:35.500570    9526 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:35.500929    9526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:35.500933    9526 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:35.500936    9526 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:35.501128    9526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:37:35.502535    9526 out.go:298] Setting JSON to false
	I0731 12:37:35.518917    9526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5818,"bootTime":1722448837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:37:35.518997    9526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:35.525888    9526 out.go:177] * [flannel-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:35.533811    9526 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:37:35.533858    9526 notify.go:220] Checking for updates...
	I0731 12:37:35.540766    9526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:37:35.543817    9526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:35.546843    9526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:35.549760    9526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:37:35.552786    9526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:35.556172    9526 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:35.556244    9526 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:35.556298    9526 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:35.559751    9526 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:35.566816    9526 start.go:297] selected driver: qemu2
	I0731 12:37:35.566824    9526 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:35.566831    9526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:35.569264    9526 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:35.570745    9526 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:35.573923    9526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:37:35.573963    9526 cni.go:84] Creating CNI manager for "flannel"
	I0731 12:37:35.573971    9526 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0731 12:37:35.574003    9526 start.go:340] cluster config:
	{Name:flannel-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:35.577688    9526 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:35.584779    9526 out.go:177] * Starting "flannel-452000" primary control-plane node in "flannel-452000" cluster
	I0731 12:37:35.588815    9526 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:35.588835    9526 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:35.588847    9526 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:35.588918    9526 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:35.588924    9526 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:35.588990    9526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/flannel-452000/config.json ...
	I0731 12:37:35.589001    9526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/flannel-452000/config.json: {Name:mke68bb1c79fdd47aa3c771335c067407b49c00f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:35.589224    9526 start.go:360] acquireMachinesLock for flannel-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:35.589257    9526 start.go:364] duration metric: took 27.791µs to acquireMachinesLock for "flannel-452000"
	I0731 12:37:35.589268    9526 start.go:93] Provisioning new machine with config: &{Name:flannel-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:35.589306    9526 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:35.596838    9526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:37:35.614622    9526 start.go:159] libmachine.API.Create for "flannel-452000" (driver="qemu2")
	I0731 12:37:35.614649    9526 client.go:168] LocalClient.Create starting
	I0731 12:37:35.614703    9526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:35.614734    9526 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:35.614744    9526 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:35.614781    9526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:35.614805    9526 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:35.614815    9526 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:35.615213    9526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:35.765840    9526 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:35.918353    9526 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:35.918359    9526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:35.918550    9526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2
	I0731 12:37:35.927854    9526 main.go:141] libmachine: STDOUT: 
	I0731 12:37:35.927869    9526 main.go:141] libmachine: STDERR: 
	I0731 12:37:35.927915    9526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2 +20000M
	I0731 12:37:35.935673    9526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:35.935695    9526 main.go:141] libmachine: STDERR: 
	I0731 12:37:35.935709    9526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2
	I0731 12:37:35.935713    9526 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:35.935725    9526 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:35.935759    9526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:17:17:eb:75:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2
	I0731 12:37:35.937307    9526 main.go:141] libmachine: STDOUT: 
	I0731 12:37:35.937321    9526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:35.937338    9526 client.go:171] duration metric: took 322.692125ms to LocalClient.Create
	I0731 12:37:37.939469    9526 start.go:128] duration metric: took 2.350187625s to createHost
	I0731 12:37:37.939572    9526 start.go:83] releasing machines lock for "flannel-452000", held for 2.350339125s
	W0731 12:37:37.939680    9526 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:37.950815    9526 out.go:177] * Deleting "flannel-452000" in qemu2 ...
	W0731 12:37:37.988213    9526 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:37.988243    9526 start.go:729] Will try again in 5 seconds ...
	I0731 12:37:42.990312    9526 start.go:360] acquireMachinesLock for flannel-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:42.990849    9526 start.go:364] duration metric: took 449.625µs to acquireMachinesLock for "flannel-452000"
	I0731 12:37:42.991014    9526 start.go:93] Provisioning new machine with config: &{Name:flannel-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:42.991362    9526 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:43.008928    9526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:37:43.062687    9526 start.go:159] libmachine.API.Create for "flannel-452000" (driver="qemu2")
	I0731 12:37:43.062737    9526 client.go:168] LocalClient.Create starting
	I0731 12:37:43.062848    9526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:43.062913    9526 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:43.062930    9526 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:43.063000    9526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:43.063044    9526 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:43.063078    9526 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:43.063628    9526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:43.229646    9526 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:43.363210    9526 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:43.363215    9526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:43.363421    9526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2
	I0731 12:37:43.372825    9526 main.go:141] libmachine: STDOUT: 
	I0731 12:37:43.372853    9526 main.go:141] libmachine: STDERR: 
	I0731 12:37:43.372900    9526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2 +20000M
	I0731 12:37:43.380622    9526 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:43.380634    9526 main.go:141] libmachine: STDERR: 
	I0731 12:37:43.380656    9526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2
	I0731 12:37:43.380661    9526 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:43.380673    9526 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:43.380707    9526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e1:f8:e5:f2:d5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/flannel-452000/disk.qcow2
	I0731 12:37:43.382321    9526 main.go:141] libmachine: STDOUT: 
	I0731 12:37:43.382341    9526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:43.382361    9526 client.go:171] duration metric: took 319.624917ms to LocalClient.Create
	I0731 12:37:45.384496    9526 start.go:128] duration metric: took 2.393143958s to createHost
	I0731 12:37:45.384559    9526 start.go:83] releasing machines lock for "flannel-452000", held for 2.393710166s
	W0731 12:37:45.384907    9526 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:45.399667    9526 out.go:177] 
	W0731 12:37:45.403577    9526 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:45.403600    9526 out.go:239] * 
	* 
	W0731 12:37:45.406045    9526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:45.419502    9526 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.96s)
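
Note: the sequence in these logs is minikube's standard retry path: StartHost fails, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION (exit status 80). A compressed sketch of that control flow, with illustrative names rather than minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the qemu2 driver's start path, which fails for as
// long as the socket_vmnet daemon is down.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Fprintf(os.Stderr, "! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status every Start test in this report records
		}
	}
}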

TestNetworkPlugins/group/kindnet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.866464375s)

-- stdout --
	* [kindnet-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-452000" primary control-plane node in "kindnet-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:47.808345    9646 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:47.808480    9646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:47.808484    9646 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:47.808486    9646 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:47.808603    9646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:37:47.809678    9646 out.go:298] Setting JSON to false
	I0731 12:37:47.825707    9646 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5830,"bootTime":1722448837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:37:47.825780    9646 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:47.832701    9646 out.go:177] * [kindnet-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:47.840903    9646 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:37:47.840957    9646 notify.go:220] Checking for updates...
	I0731 12:37:47.847871    9646 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:37:47.850874    9646 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:47.853868    9646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:47.856866    9646 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:37:47.859889    9646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:47.861511    9646 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:47.861586    9646 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:47.861631    9646 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:47.865811    9646 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:37:47.872754    9646 start.go:297] selected driver: qemu2
	I0731 12:37:47.872762    9646 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:37:47.872769    9646 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:37:47.874993    9646 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:37:47.877886    9646 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:37:47.880946    9646 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:37:47.880961    9646 cni.go:84] Creating CNI manager for "kindnet"
	I0731 12:37:47.880964    9646 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:37:47.880996    9646 start.go:340] cluster config:
	{Name:kindnet-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:37:47.884666    9646 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:37:47.891865    9646 out.go:177] * Starting "kindnet-452000" primary control-plane node in "kindnet-452000" cluster
	I0731 12:37:47.895867    9646 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:37:47.895882    9646 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:37:47.895893    9646 cache.go:56] Caching tarball of preloaded images
	I0731 12:37:47.895956    9646 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:37:47.895962    9646 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:37:47.896029    9646 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kindnet-452000/config.json ...
	I0731 12:37:47.896042    9646 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kindnet-452000/config.json: {Name:mk12271a76e61d7a826ec053efc17bf7bd9157e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:37:47.896391    9646 start.go:360] acquireMachinesLock for kindnet-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:47.896425    9646 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "kindnet-452000"
	I0731 12:37:47.896440    9646 start.go:93] Provisioning new machine with config: &{Name:kindnet-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:47.896486    9646 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:47.900865    9646 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:37:47.917824    9646 start.go:159] libmachine.API.Create for "kindnet-452000" (driver="qemu2")
	I0731 12:37:47.917861    9646 client.go:168] LocalClient.Create starting
	I0731 12:37:47.917915    9646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:47.917947    9646 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:47.917956    9646 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:47.917997    9646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:47.918019    9646 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:47.918033    9646 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:47.918480    9646 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:48.070387    9646 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:48.147838    9646 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:48.147844    9646 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:48.148027    9646 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2
	I0731 12:37:48.157078    9646 main.go:141] libmachine: STDOUT: 
	I0731 12:37:48.157169    9646 main.go:141] libmachine: STDERR: 
	I0731 12:37:48.157212    9646 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2 +20000M
	I0731 12:37:48.165001    9646 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:48.165090    9646 main.go:141] libmachine: STDERR: 
	I0731 12:37:48.165104    9646 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2
	I0731 12:37:48.165108    9646 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:48.165120    9646 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:48.165145    9646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f2:d1:75:79:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2
	I0731 12:37:48.166790    9646 main.go:141] libmachine: STDOUT: 
	I0731 12:37:48.166804    9646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:48.166823    9646 client.go:171] duration metric: took 248.962334ms to LocalClient.Create
	I0731 12:37:50.168955    9646 start.go:128] duration metric: took 2.272494s to createHost
	I0731 12:37:50.169079    9646 start.go:83] releasing machines lock for "kindnet-452000", held for 2.272638417s
	W0731 12:37:50.169142    9646 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:50.180314    9646 out.go:177] * Deleting "kindnet-452000" in qemu2 ...
	W0731 12:37:50.218966    9646 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:50.218990    9646 start.go:729] Will try again in 5 seconds ...
	I0731 12:37:55.221189    9646 start.go:360] acquireMachinesLock for kindnet-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:37:55.221694    9646 start.go:364] duration metric: took 387.75µs to acquireMachinesLock for "kindnet-452000"
	I0731 12:37:55.221848    9646 start.go:93] Provisioning new machine with config: &{Name:kindnet-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:37:55.222259    9646 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:37:55.244161    9646 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:37:55.295126    9646 start.go:159] libmachine.API.Create for "kindnet-452000" (driver="qemu2")
	I0731 12:37:55.295188    9646 client.go:168] LocalClient.Create starting
	I0731 12:37:55.295320    9646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:37:55.295381    9646 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:55.295398    9646 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:55.295471    9646 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:37:55.295519    9646 main.go:141] libmachine: Decoding PEM data...
	I0731 12:37:55.295536    9646 main.go:141] libmachine: Parsing certificate...
	I0731 12:37:55.296202    9646 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:37:55.458593    9646 main.go:141] libmachine: Creating SSH key...
	I0731 12:37:55.580306    9646 main.go:141] libmachine: Creating Disk image...
	I0731 12:37:55.580317    9646 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:37:55.580512    9646 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2
	I0731 12:37:55.589926    9646 main.go:141] libmachine: STDOUT: 
	I0731 12:37:55.589943    9646 main.go:141] libmachine: STDERR: 
	I0731 12:37:55.589992    9646 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2 +20000M
	I0731 12:37:55.597797    9646 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:37:55.597811    9646 main.go:141] libmachine: STDERR: 
	I0731 12:37:55.597831    9646 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2
	I0731 12:37:55.597836    9646 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:37:55.597848    9646 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:37:55.597896    9646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:1a:56:fd:9b:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kindnet-452000/disk.qcow2
	I0731 12:37:55.599529    9646 main.go:141] libmachine: STDOUT: 
	I0731 12:37:55.599546    9646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:37:55.599557    9646 client.go:171] duration metric: took 304.370041ms to LocalClient.Create
	I0731 12:37:57.601694    9646 start.go:128] duration metric: took 2.37945625s to createHost
	I0731 12:37:57.601765    9646 start.go:83] releasing machines lock for "kindnet-452000", held for 2.380095583s
	W0731 12:37:57.602156    9646 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:37:57.611590    9646 out.go:177] 
	W0731 12:37:57.620829    9646 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:37:57.620870    9646 out.go:239] * 
	* 
	W0731 12:37:57.623230    9646 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:37:57.633711    9646 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.87s)
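Every failure in this group traces to the same host-side condition: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched. As a hedged diagnostic sketch (not part of the recorded run; it assumes socket_vmnet lives under /opt/socket_vmnet as the logs show, and that it may have been installed as a launchd service), one might check the daemon on the build host before retrying:

	# Is the socket present and the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If installed as a launchd service:
	sudo launchctl list | grep -i socket_vmnet
	# Exercise the client in isolation; with the daemon down this should
	# print the same "Connection refused" seen above:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true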

TestNetworkPlugins/group/enable-default-cni/Start (9.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.815036542s)

-- stdout --
	* [enable-default-cni-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-452000" primary control-plane node in "enable-default-cni-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:37:59.938390    9760 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:37:59.938527    9760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:59.938530    9760 out.go:304] Setting ErrFile to fd 2...
	I0731 12:37:59.938533    9760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:37:59.938655    9760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:37:59.939667    9760 out.go:298] Setting JSON to false
	I0731 12:37:59.955879    9760 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5842,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:37:59.955940    9760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:37:59.962179    9760 out.go:177] * [enable-default-cni-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:37:59.970120    9760 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:37:59.970188    9760 notify.go:220] Checking for updates...
	I0731 12:37:59.977119    9760 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:37:59.980097    9760 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:37:59.983080    9760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:37:59.986104    9760 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:37:59.989063    9760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:37:59.992407    9760 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:59.992478    9760 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:37:59.992523    9760 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:37:59.997107    9760 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:00.004043    9760 start.go:297] selected driver: qemu2
	I0731 12:38:00.004048    9760 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:00.004054    9760 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:00.006346    9760 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:00.009179    9760 out.go:177] * Automatically selected the socket_vmnet network
	E0731 12:38:00.012144    9760 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0731 12:38:00.012156    9760 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:00.012172    9760 cni.go:84] Creating CNI manager for "bridge"
	I0731 12:38:00.012175    9760 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:38:00.012216    9760 start.go:340] cluster config:
	{Name:enable-default-cni-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:00.015868    9760 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:00.023094    9760 out.go:177] * Starting "enable-default-cni-452000" primary control-plane node in "enable-default-cni-452000" cluster
	I0731 12:38:00.027058    9760 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:00.027076    9760 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:00.027091    9760 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:00.027156    9760 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:00.027171    9760 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:00.027250    9760 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/enable-default-cni-452000/config.json ...
	I0731 12:38:00.027266    9760 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/enable-default-cni-452000/config.json: {Name:mk1bb6ea64ba3eee23f76efd54ff6336d75136df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:00.027608    9760 start.go:360] acquireMachinesLock for enable-default-cni-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:00.027645    9760 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "enable-default-cni-452000"
	I0731 12:38:00.027656    9760 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:00.027687    9760 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:00.031953    9760 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:00.049063    9760 start.go:159] libmachine.API.Create for "enable-default-cni-452000" (driver="qemu2")
	I0731 12:38:00.049088    9760 client.go:168] LocalClient.Create starting
	I0731 12:38:00.049144    9760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:00.049175    9760 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:00.049182    9760 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:00.049215    9760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:00.049237    9760 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:00.049244    9760 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:00.049698    9760 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:00.200788    9760 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:00.255680    9760 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:00.255685    9760 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:00.255871    9760 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2
	I0731 12:38:00.264937    9760 main.go:141] libmachine: STDOUT: 
	I0731 12:38:00.264954    9760 main.go:141] libmachine: STDERR: 
	I0731 12:38:00.265007    9760 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2 +20000M
	I0731 12:38:00.272812    9760 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:00.272823    9760 main.go:141] libmachine: STDERR: 
	I0731 12:38:00.272837    9760 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2
	I0731 12:38:00.272841    9760 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:00.272854    9760 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:00.272877    9760 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d4:7a:39:67:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2
	I0731 12:38:00.274491    9760 main.go:141] libmachine: STDOUT: 
	I0731 12:38:00.274511    9760 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:00.274531    9760 client.go:171] duration metric: took 225.445208ms to LocalClient.Create
	I0731 12:38:02.276722    9760 start.go:128] duration metric: took 2.249056708s to createHost
	I0731 12:38:02.276777    9760 start.go:83] releasing machines lock for "enable-default-cni-452000", held for 2.249169584s
	W0731 12:38:02.276844    9760 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:02.286936    9760 out.go:177] * Deleting "enable-default-cni-452000" in qemu2 ...
	W0731 12:38:02.321488    9760 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:02.321513    9760 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:07.323440    9760 start.go:360] acquireMachinesLock for enable-default-cni-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:07.323980    9760 start.go:364] duration metric: took 419.125µs to acquireMachinesLock for "enable-default-cni-452000"
	I0731 12:38:07.324113    9760 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:07.324368    9760 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:07.334081    9760 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:07.385493    9760 start.go:159] libmachine.API.Create for "enable-default-cni-452000" (driver="qemu2")
	I0731 12:38:07.385547    9760 client.go:168] LocalClient.Create starting
	I0731 12:38:07.385667    9760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:07.385730    9760 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:07.385746    9760 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:07.385811    9760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:07.385854    9760 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:07.385868    9760 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:07.386385    9760 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:07.551650    9760 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:07.655458    9760 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:07.655464    9760 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:07.655649    9760 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2
	I0731 12:38:07.665269    9760 main.go:141] libmachine: STDOUT: 
	I0731 12:38:07.665284    9760 main.go:141] libmachine: STDERR: 
	I0731 12:38:07.665339    9760 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2 +20000M
	I0731 12:38:07.673158    9760 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:07.673169    9760 main.go:141] libmachine: STDERR: 
	I0731 12:38:07.673181    9760 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2
	I0731 12:38:07.673191    9760 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:07.673203    9760 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:07.673234    9760 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d9:1b:94:3e:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/enable-default-cni-452000/disk.qcow2
	I0731 12:38:07.674859    9760 main.go:141] libmachine: STDOUT: 
	I0731 12:38:07.674872    9760 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:07.674883    9760 client.go:171] duration metric: took 289.336ms to LocalClient.Create
	I0731 12:38:09.677058    9760 start.go:128] duration metric: took 2.3527015s to createHost
	I0731 12:38:09.677136    9760 start.go:83] releasing machines lock for "enable-default-cni-452000", held for 2.353180042s
	W0731 12:38:09.677583    9760 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:09.692309    9760 out.go:177] 
	W0731 12:38:09.696423    9760 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:09.696449    9760 out.go:239] * 
	* 
	W0731 12:38:09.698855    9760 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:09.712257    9760 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.82s)
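Note the stderr line "E0731 12:38:00.012144 ... Found deprecated --enable-default-cni flag, setting --cni=bridge": this test ends up on the same bridge CNI path as the bridge group below. A sketch of the equivalent, non-deprecated invocation (it would still exit 80 here for the same socket_vmnet reason):

	out/minikube-darwin-arm64 start -p enable-default-cni-452000 --memory=3072 \
	  --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2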

TestNetworkPlugins/group/bridge/Start (9.79s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.789584042s)

-- stdout --
	* [bridge-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-452000" primary control-plane node in "bridge-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:38:11.924149    9875 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:11.924262    9875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:11.924265    9875 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:11.924267    9875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:11.924407    9875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:38:11.925434    9875 out.go:298] Setting JSON to false
	I0731 12:38:11.941403    9875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5854,"bootTime":1722448837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:38:11.941465    9875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:11.947215    9875 out.go:177] * [bridge-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:11.955118    9875 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:38:11.955166    9875 notify.go:220] Checking for updates...
	I0731 12:38:11.962078    9875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:38:11.965114    9875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:11.968105    9875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:11.971046    9875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:38:11.974104    9875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:11.977515    9875 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:11.977583    9875 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:11.977638    9875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:11.982032    9875 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:11.989149    9875 start.go:297] selected driver: qemu2
	I0731 12:38:11.989155    9875 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:11.989162    9875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:11.991405    9875 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:11.994145    9875 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:11.997176    9875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:11.997236    9875 cni.go:84] Creating CNI manager for "bridge"
	I0731 12:38:11.997241    9875 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:38:11.997275    9875 start.go:340] cluster config:
	{Name:bridge-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:12.001066    9875 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:12.008108    9875 out.go:177] * Starting "bridge-452000" primary control-plane node in "bridge-452000" cluster
	I0731 12:38:12.012124    9875 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:12.012139    9875 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:12.012149    9875 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:12.012210    9875 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:12.012218    9875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:12.012271    9875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/bridge-452000/config.json ...
	I0731 12:38:12.012282    9875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/bridge-452000/config.json: {Name:mk200d8d8202950551e2dac888aa5b54388e8de5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:12.012509    9875 start.go:360] acquireMachinesLock for bridge-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:12.012545    9875 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "bridge-452000"
	I0731 12:38:12.012557    9875 start.go:93] Provisioning new machine with config: &{Name:bridge-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:12.012586    9875 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:12.021103    9875 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:12.039269    9875 start.go:159] libmachine.API.Create for "bridge-452000" (driver="qemu2")
	I0731 12:38:12.039289    9875 client.go:168] LocalClient.Create starting
	I0731 12:38:12.039346    9875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:12.039377    9875 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:12.039388    9875 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:12.039429    9875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:12.039453    9875 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:12.039463    9875 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:12.039882    9875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:12.191785    9875 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:12.253435    9875 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:12.253440    9875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:12.253619    9875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2
	I0731 12:38:12.262752    9875 main.go:141] libmachine: STDOUT: 
	I0731 12:38:12.262768    9875 main.go:141] libmachine: STDERR: 
	I0731 12:38:12.262820    9875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2 +20000M
	I0731 12:38:12.270602    9875 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:12.270617    9875 main.go:141] libmachine: STDERR: 
	I0731 12:38:12.270631    9875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2
	I0731 12:38:12.270633    9875 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:12.270646    9875 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:12.270670    9875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:c4:00:12:e1:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2
	I0731 12:38:12.272335    9875 main.go:141] libmachine: STDOUT: 
	I0731 12:38:12.272351    9875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:12.272369    9875 client.go:171] duration metric: took 233.080166ms to LocalClient.Create
	I0731 12:38:14.274507    9875 start.go:128] duration metric: took 2.261949916s to createHost
	I0731 12:38:14.274563    9875 start.go:83] releasing machines lock for "bridge-452000", held for 2.2620565s
	W0731 12:38:14.274630    9875 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:14.289745    9875 out.go:177] * Deleting "bridge-452000" in qemu2 ...
	W0731 12:38:14.323561    9875 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:14.323583    9875 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:19.323914    9875 start.go:360] acquireMachinesLock for bridge-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:19.324463    9875 start.go:364] duration metric: took 442.75µs to acquireMachinesLock for "bridge-452000"
	I0731 12:38:19.324685    9875 start.go:93] Provisioning new machine with config: &{Name:bridge-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:19.325014    9875 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:19.340541    9875 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:19.391948    9875 start.go:159] libmachine.API.Create for "bridge-452000" (driver="qemu2")
	I0731 12:38:19.392004    9875 client.go:168] LocalClient.Create starting
	I0731 12:38:19.392136    9875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:19.392199    9875 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:19.392223    9875 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:19.392285    9875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:19.392329    9875 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:19.392342    9875 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:19.392884    9875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:19.554148    9875 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:19.618796    9875 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:19.618801    9875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:19.618983    9875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2
	I0731 12:38:19.628229    9875 main.go:141] libmachine: STDOUT: 
	I0731 12:38:19.628245    9875 main.go:141] libmachine: STDERR: 
	I0731 12:38:19.628286    9875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2 +20000M
	I0731 12:38:19.636081    9875 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:19.636097    9875 main.go:141] libmachine: STDERR: 
	I0731 12:38:19.636110    9875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2
	I0731 12:38:19.636113    9875 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:19.636119    9875 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:19.636151    9875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:0c:cc:94:b4:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/bridge-452000/disk.qcow2
	I0731 12:38:19.637846    9875 main.go:141] libmachine: STDOUT: 
	I0731 12:38:19.637860    9875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:19.637872    9875 client.go:171] duration metric: took 245.868167ms to LocalClient.Create
	I0731 12:38:21.640003    9875 start.go:128] duration metric: took 2.315011333s to createHost
	I0731 12:38:21.640054    9875 start.go:83] releasing machines lock for "bridge-452000", held for 2.315553042s
	W0731 12:38:21.640501    9875 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:21.650094    9875 out.go:177] 
	W0731 12:38:21.658181    9875 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:21.658213    9875 out.go:239] * 
	* 
	W0731 12:38:21.660863    9875 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:21.671081    9875 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.79s)
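
Note on the failure mode: every start attempt in this group dies at the same step. QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the unix socket at /var/run/socket_vmnet; the "Connection refused" in STDERR means no socket_vmnet daemon was listening on the build host, so the VM never gets a network and the test fails before Kubernetes is involved at all. A minimal Go probe that reproduces just the connectivity check (a hypothetical helper, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the STDERR above and means
			// the daemon is not running (or the socket file is stale).
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet daemon is listening on", sock)
	}

If this probe fails too, restarting the socket_vmnet daemon on the host (it runs separately from minikube, typically as root since vmnet access requires elevated privileges) should clear this whole group of failures.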

TestNetworkPlugins/group/kubenet/Start (9.84s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.836293833s)

-- stdout --
	* [kubenet-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-452000" primary control-plane node in "kubenet-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:38:23.815406    9984 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:23.815535    9984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:23.815538    9984 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:23.815541    9984 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:23.815659    9984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:38:23.816715    9984 out.go:298] Setting JSON to false
	I0731 12:38:23.832831    9984 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5866,"bootTime":1722448837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:38:23.832906    9984 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:23.839889    9984 out.go:177] * [kubenet-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:23.846805    9984 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:38:23.846863    9984 notify.go:220] Checking for updates...
	I0731 12:38:23.852856    9984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:38:23.855868    9984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:23.858931    9984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:23.861899    9984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:38:23.864899    9984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:23.868177    9984 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:23.868250    9984 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:23.868300    9984 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:23.872881    9984 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:23.879891    9984 start.go:297] selected driver: qemu2
	I0731 12:38:23.879897    9984 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:23.879904    9984 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:23.882213    9984 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:23.884873    9984 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:23.886026    9984 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:23.886041    9984 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0731 12:38:23.886061    9984 start.go:340] cluster config:
	{Name:kubenet-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:23.889587    9984 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:23.896903    9984 out.go:177] * Starting "kubenet-452000" primary control-plane node in "kubenet-452000" cluster
	I0731 12:38:23.900834    9984 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:23.900850    9984 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:23.900861    9984 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:23.900925    9984 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:23.900933    9984 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:23.900995    9984 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kubenet-452000/config.json ...
	I0731 12:38:23.901007    9984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/kubenet-452000/config.json: {Name:mk792c5d96e39a6dbdaad16a729f25a06dc0c543 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:23.901339    9984 start.go:360] acquireMachinesLock for kubenet-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:23.901370    9984 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "kubenet-452000"
	I0731 12:38:23.901380    9984 start.go:93] Provisioning new machine with config: &{Name:kubenet-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:23.901404    9984 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:23.904955    9984 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:23.921349    9984 start.go:159] libmachine.API.Create for "kubenet-452000" (driver="qemu2")
	I0731 12:38:23.921373    9984 client.go:168] LocalClient.Create starting
	I0731 12:38:23.921427    9984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:23.921464    9984 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:23.921472    9984 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:23.921516    9984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:23.921537    9984 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:23.921550    9984 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:23.922021    9984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:24.073141    9984 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:24.141499    9984 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:24.141504    9984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:24.141694    9984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2
	I0731 12:38:24.150736    9984 main.go:141] libmachine: STDOUT: 
	I0731 12:38:24.150752    9984 main.go:141] libmachine: STDERR: 
	I0731 12:38:24.150804    9984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2 +20000M
	I0731 12:38:24.158600    9984 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:24.158613    9984 main.go:141] libmachine: STDERR: 
	I0731 12:38:24.158646    9984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2
	I0731 12:38:24.158650    9984 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:24.158663    9984 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:24.158692    9984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:73:83:fd:b7:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2
	I0731 12:38:24.160344    9984 main.go:141] libmachine: STDOUT: 
	I0731 12:38:24.160358    9984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:24.160374    9984 client.go:171] duration metric: took 239.003208ms to LocalClient.Create
	I0731 12:38:26.162501    9984 start.go:128] duration metric: took 2.261123584s to createHost
	I0731 12:38:26.162545    9984 start.go:83] releasing machines lock for "kubenet-452000", held for 2.261215958s
	W0731 12:38:26.162623    9984 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:26.173496    9984 out.go:177] * Deleting "kubenet-452000" in qemu2 ...
	W0731 12:38:26.210978    9984 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:26.211009    9984 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:31.213057    9984 start.go:360] acquireMachinesLock for kubenet-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:31.213446    9984 start.go:364] duration metric: took 321.875µs to acquireMachinesLock for "kubenet-452000"
	I0731 12:38:31.213545    9984 start.go:93] Provisioning new machine with config: &{Name:kubenet-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:31.213858    9984 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:31.229582    9984 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:31.283283    9984 start.go:159] libmachine.API.Create for "kubenet-452000" (driver="qemu2")
	I0731 12:38:31.283337    9984 client.go:168] LocalClient.Create starting
	I0731 12:38:31.283451    9984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:31.283507    9984 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:31.283522    9984 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:31.283596    9984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:31.283639    9984 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:31.283659    9984 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:31.284274    9984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:31.446480    9984 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:31.560311    9984 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:31.560316    9984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:31.560501    9984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2
	I0731 12:38:31.569743    9984 main.go:141] libmachine: STDOUT: 
	I0731 12:38:31.569763    9984 main.go:141] libmachine: STDERR: 
	I0731 12:38:31.569814    9984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2 +20000M
	I0731 12:38:31.577659    9984 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:31.577680    9984 main.go:141] libmachine: STDERR: 
	I0731 12:38:31.577699    9984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2
	I0731 12:38:31.577703    9984 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:31.577715    9984 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:31.577741    9984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:f3:ff:0a:1a:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/kubenet-452000/disk.qcow2
	I0731 12:38:31.579368    9984 main.go:141] libmachine: STDOUT: 
	I0731 12:38:31.579385    9984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:31.579396    9984 client.go:171] duration metric: took 296.062291ms to LocalClient.Create
	I0731 12:38:33.581526    9984 start.go:128] duration metric: took 2.367661708s to createHost
	I0731 12:38:33.581585    9984 start.go:83] releasing machines lock for "kubenet-452000", held for 2.368167333s
	W0731 12:38:33.582012    9984 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:33.592638    9984 out.go:177] 
	W0731 12:38:33.596683    9984 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:33.596721    9984 out.go:239] * 
	* 
	W0731 12:38:33.599564    9984 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:33.609540    9984 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.84s)
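
The kubenet run repeats the same create / delete / retry cycle as the bridge run above: createHost fails, the partial "kubenet-452000" machine is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. A compact sketch of that control flow as reconstructed from the log lines (hypothetical helper names, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for minikube's createHost step (start.go:125);
	// here it always fails the way the log does, to make the flow visible.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost(profile string) {
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
	}

	func startWithRetry(profile string) error {
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			deleteHost(profile)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(profile); err != nil {
				return fmt.Errorf("GUEST_PROVISION: %w", err)
			}
		}
		return nil
	}

	func main() {
		if err := startWithRetry("kubenet-452000"); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}

Because both attempts hit the same refused socket, the retry cannot succeed; the 5-second backoff plus the two ~2.3s createHost attempts account for most of the 9.84s test duration.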

TestNetworkPlugins/group/custom-flannel/Start (9.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.804633083s)

-- stdout --
	* [custom-flannel-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-452000" primary control-plane node in "custom-flannel-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:38:35.744038   10093 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:35.744185   10093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:35.744188   10093 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:35.744190   10093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:35.744314   10093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:38:35.745487   10093 out.go:298] Setting JSON to false
	I0731 12:38:35.761419   10093 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5878,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:38:35.761483   10093 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:35.767264   10093 out.go:177] * [custom-flannel-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:35.774048   10093 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:38:35.774107   10093 notify.go:220] Checking for updates...
	I0731 12:38:35.781101   10093 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:38:35.784130   10093 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:35.787122   10093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:35.790099   10093 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:38:35.793100   10093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:35.796359   10093 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:35.796437   10093 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:35.796479   10093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:35.799985   10093 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:35.807071   10093 start.go:297] selected driver: qemu2
	I0731 12:38:35.807078   10093 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:35.807084   10093 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:35.809304   10093 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:35.810860   10093 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:35.814141   10093 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:35.814159   10093 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 12:38:35.814167   10093 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0731 12:38:35.814201   10093 start.go:340] cluster config:
	{Name:custom-flannel-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:35.817966   10093 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:35.826062   10093 out.go:177] * Starting "custom-flannel-452000" primary control-plane node in "custom-flannel-452000" cluster
	I0731 12:38:35.830062   10093 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:35.830080   10093 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:35.830094   10093 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:35.830160   10093 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:35.830167   10093 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:35.830250   10093 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/custom-flannel-452000/config.json ...
	I0731 12:38:35.830262   10093 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/custom-flannel-452000/config.json: {Name:mkf9fe63be633b269362c65a97fc94fb43381e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:35.830481   10093 start.go:360] acquireMachinesLock for custom-flannel-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:35.830517   10093 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "custom-flannel-452000"
	I0731 12:38:35.830528   10093 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:35.830558   10093 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:35.838093   10093 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:35.855558   10093 start.go:159] libmachine.API.Create for "custom-flannel-452000" (driver="qemu2")
	I0731 12:38:35.855594   10093 client.go:168] LocalClient.Create starting
	I0731 12:38:35.855657   10093 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:35.855693   10093 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:35.855703   10093 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:35.855744   10093 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:35.855767   10093 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:35.855773   10093 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:35.856145   10093 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:36.024968   10093 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:36.115991   10093 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:36.116001   10093 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:36.116180   10093 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2
	I0731 12:38:36.125289   10093 main.go:141] libmachine: STDOUT: 
	I0731 12:38:36.125395   10093 main.go:141] libmachine: STDERR: 
	I0731 12:38:36.125436   10093 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2 +20000M
	I0731 12:38:36.133390   10093 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:36.133525   10093 main.go:141] libmachine: STDERR: 
	I0731 12:38:36.133551   10093 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2
	I0731 12:38:36.133556   10093 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:36.133565   10093 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:36.133589   10093 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:f6:74:13:fe:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2
	I0731 12:38:36.135197   10093 main.go:141] libmachine: STDOUT: 
	I0731 12:38:36.135265   10093 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:36.135288   10093 client.go:171] duration metric: took 279.695292ms to LocalClient.Create
	I0731 12:38:38.137452   10093 start.go:128] duration metric: took 2.306922667s to createHost
	I0731 12:38:38.137514   10093 start.go:83] releasing machines lock for "custom-flannel-452000", held for 2.307037041s
	W0731 12:38:38.137620   10093 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:38.146772   10093 out.go:177] * Deleting "custom-flannel-452000" in qemu2 ...
	W0731 12:38:38.183016   10093 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:38.183045   10093 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:43.184506   10093 start.go:360] acquireMachinesLock for custom-flannel-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:43.185059   10093 start.go:364] duration metric: took 445.542µs to acquireMachinesLock for "custom-flannel-452000"
	I0731 12:38:43.185201   10093 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:43.185529   10093 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:43.203198   10093 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:43.255208   10093 start.go:159] libmachine.API.Create for "custom-flannel-452000" (driver="qemu2")
	I0731 12:38:43.255252   10093 client.go:168] LocalClient.Create starting
	I0731 12:38:43.255376   10093 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:43.255437   10093 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:43.255453   10093 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:43.255512   10093 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:43.255557   10093 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:43.255598   10093 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:43.256268   10093 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:43.418083   10093 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:43.451764   10093 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:43.451771   10093 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:43.451948   10093 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2
	I0731 12:38:43.460976   10093 main.go:141] libmachine: STDOUT: 
	I0731 12:38:43.461079   10093 main.go:141] libmachine: STDERR: 
	I0731 12:38:43.461127   10093 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2 +20000M
	I0731 12:38:43.469103   10093 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:43.469118   10093 main.go:141] libmachine: STDERR: 
	I0731 12:38:43.469131   10093 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2
	I0731 12:38:43.469135   10093 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:43.469144   10093 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:43.469183   10093 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:43:90:7c:bb:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/custom-flannel-452000/disk.qcow2
	I0731 12:38:43.470802   10093 main.go:141] libmachine: STDOUT: 
	I0731 12:38:43.470816   10093 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:43.470828   10093 client.go:171] duration metric: took 215.5745ms to LocalClient.Create
	I0731 12:38:45.472962   10093 start.go:128] duration metric: took 2.287454125s to createHost
	I0731 12:38:45.473019   10093 start.go:83] releasing machines lock for "custom-flannel-452000", held for 2.287986583s
	W0731 12:38:45.473463   10093 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:45.488755   10093 out.go:177] 
	W0731 12:38:45.492712   10093 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:45.492754   10093 out.go:239] * 
	* 
	W0731 12:38:45.495158   10093 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:45.507512   10093 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.81s)
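
Every failure in this group reduces to the same root cause visible in the stderr capture above: /opt/socket_vmnet/bin/socket_vmnet_client is refused when it dials /var/run/socket_vmnet, so QEMU never receives its network file descriptor and provisioning aborts with GUEST_PROVISION (exit status 80). A minimal standalone Go sketch of just that failing step, useful for probing the CI host independently of minikube (the probe program is illustrative and is not part of the test suite):

	package main
	
	import (
		"fmt"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Socket path copied from the qemu2 driver logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition as every failed start above: nothing is listening,
			// so socket_vmnet_client is refused before QEMU can be handed a
			// network file descriptor.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the agent, the cause is likely environmental (the socket_vmnet daemon is not running, or the socket's permissions block the Jenkins user), which would explain the uniform ~10 s failures across the whole network-plugins group.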

TestNetworkPlugins/group/calico/Start (9.93s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.932814292s)

-- stdout --
	* [calico-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-452000" primary control-plane node in "calico-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:38:47.825952   10216 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:38:47.826072   10216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:47.826075   10216 out.go:304] Setting ErrFile to fd 2...
	I0731 12:38:47.826077   10216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:38:47.826197   10216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:38:47.827225   10216 out.go:298] Setting JSON to false
	I0731 12:38:47.843517   10216 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5890,"bootTime":1722448837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:38:47.843580   10216 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:38:47.849132   10216 out.go:177] * [calico-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:38:47.858007   10216 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:38:47.858052   10216 notify.go:220] Checking for updates...
	I0731 12:38:47.865934   10216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:38:47.868974   10216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:38:47.871981   10216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:38:47.875005   10216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:38:47.878005   10216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:38:47.881253   10216 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:47.881322   10216 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:38:47.881372   10216 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:38:47.886013   10216 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:38:47.895010   10216 start.go:297] selected driver: qemu2
	I0731 12:38:47.895018   10216 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:38:47.895025   10216 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:38:47.897356   10216 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:38:47.901039   10216 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:38:47.904167   10216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:38:47.904183   10216 cni.go:84] Creating CNI manager for "calico"
	I0731 12:38:47.904187   10216 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 12:38:47.904214   10216 start.go:340] cluster config:
	{Name:calico-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:38:47.908029   10216 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:38:47.915009   10216 out.go:177] * Starting "calico-452000" primary control-plane node in "calico-452000" cluster
	I0731 12:38:47.918984   10216 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:38:47.918998   10216 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:38:47.919008   10216 cache.go:56] Caching tarball of preloaded images
	I0731 12:38:47.919067   10216 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:38:47.919073   10216 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:38:47.919134   10216 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/calico-452000/config.json ...
	I0731 12:38:47.919145   10216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/calico-452000/config.json: {Name:mk00bc0c561620d164a56f5ecc8d76f493f70396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:38:47.919493   10216 start.go:360] acquireMachinesLock for calico-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:47.919529   10216 start.go:364] duration metric: took 30µs to acquireMachinesLock for "calico-452000"
	I0731 12:38:47.919540   10216 start.go:93] Provisioning new machine with config: &{Name:calico-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:47.919575   10216 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:47.924029   10216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:47.942814   10216 start.go:159] libmachine.API.Create for "calico-452000" (driver="qemu2")
	I0731 12:38:47.942851   10216 client.go:168] LocalClient.Create starting
	I0731 12:38:47.942914   10216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:47.942944   10216 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:47.942953   10216 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:47.942997   10216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:47.943021   10216 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:47.943031   10216 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:47.943433   10216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:48.095781   10216 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:48.175444   10216 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:48.175449   10216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:48.175628   10216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2
	I0731 12:38:48.184677   10216 main.go:141] libmachine: STDOUT: 
	I0731 12:38:48.184690   10216 main.go:141] libmachine: STDERR: 
	I0731 12:38:48.184746   10216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2 +20000M
	I0731 12:38:48.192544   10216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:48.192557   10216 main.go:141] libmachine: STDERR: 
	I0731 12:38:48.192580   10216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2
	I0731 12:38:48.192585   10216 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:48.192598   10216 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:48.192621   10216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:a3:96:8b:e9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2
	I0731 12:38:48.194196   10216 main.go:141] libmachine: STDOUT: 
	I0731 12:38:48.194208   10216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:48.194225   10216 client.go:171] duration metric: took 251.375333ms to LocalClient.Create
	I0731 12:38:50.196350   10216 start.go:128] duration metric: took 2.276804875s to createHost
	I0731 12:38:50.196428   10216 start.go:83] releasing machines lock for "calico-452000", held for 2.276937583s
	W0731 12:38:50.196484   10216 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:50.213632   10216 out.go:177] * Deleting "calico-452000" in qemu2 ...
	W0731 12:38:50.243991   10216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:50.244011   10216 start.go:729] Will try again in 5 seconds ...
	I0731 12:38:55.246235   10216 start.go:360] acquireMachinesLock for calico-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:38:55.246651   10216 start.go:364] duration metric: took 301.166µs to acquireMachinesLock for "calico-452000"
	I0731 12:38:55.246769   10216 start.go:93] Provisioning new machine with config: &{Name:calico-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:38:55.246999   10216 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:38:55.263928   10216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:38:55.314352   10216 start.go:159] libmachine.API.Create for "calico-452000" (driver="qemu2")
	I0731 12:38:55.314407   10216 client.go:168] LocalClient.Create starting
	I0731 12:38:55.314520   10216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:38:55.314583   10216 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:55.314602   10216 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:55.314672   10216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:38:55.314716   10216 main.go:141] libmachine: Decoding PEM data...
	I0731 12:38:55.314737   10216 main.go:141] libmachine: Parsing certificate...
	I0731 12:38:55.315287   10216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:38:55.476027   10216 main.go:141] libmachine: Creating SSH key...
	I0731 12:38:55.666667   10216 main.go:141] libmachine: Creating Disk image...
	I0731 12:38:55.666673   10216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:38:55.666872   10216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2
	I0731 12:38:55.676436   10216 main.go:141] libmachine: STDOUT: 
	I0731 12:38:55.676453   10216 main.go:141] libmachine: STDERR: 
	I0731 12:38:55.676515   10216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2 +20000M
	I0731 12:38:55.684475   10216 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:38:55.684490   10216 main.go:141] libmachine: STDERR: 
	I0731 12:38:55.684508   10216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2
	I0731 12:38:55.684514   10216 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:38:55.684526   10216 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:38:55.684552   10216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:43:84:f0:85:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/calico-452000/disk.qcow2
	I0731 12:38:55.686152   10216 main.go:141] libmachine: STDOUT: 
	I0731 12:38:55.686204   10216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:38:55.686218   10216 client.go:171] duration metric: took 371.814125ms to LocalClient.Create
	I0731 12:38:57.688348   10216 start.go:128] duration metric: took 2.441360375s to createHost
	I0731 12:38:57.688459   10216 start.go:83] releasing machines lock for "calico-452000", held for 2.441796792s
	W0731 12:38:57.688790   10216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:38:57.699816   10216 out.go:177] 
	W0731 12:38:57.707513   10216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:38:57.707535   10216 out.go:239] * 
	* 
	W0731 12:38:57.709403   10216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:38:57.718354   10216 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.93s)
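
The recovery path is equally consistent across these runs: after the first createHost fails, minikube deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), and makes exactly one more provisioning attempt before exiting. A hedged Go sketch of that single-retry shape (createHost and deleteHost below are hypothetical stand-ins for the driver calls logged above, not minikube's real API):

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// Hypothetical stand-ins for the driver calls seen in the capture
	// (start.go createHost and the "* Deleting ... in qemu2 ..." step).
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	
	func deleteHost(profile string) {
		// In the real flow this tears down the partially created VM.
	}
	
	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		// Mirrors "! StartHost failed, but will try again" in the capture.
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		deleteHost(profile)
		time.Sleep(5 * time.Second)
		if err := createHost(profile); err != nil {
			// A second failure is terminal: the CLI reports GUEST_PROVISION
			// and the test records exit status 80.
			return fmt.Errorf("error provisioning guest: %w", err)
		}
		return nil
	}
	
	func main() {
		if err := startWithRetry("calico-452000"); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

The retry only helps with transient VM-creation races; it does nothing to restart a missing socket_vmnet daemon, which is why the second attempt fails identically in every capture here.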

TestNetworkPlugins/group/false/Start (9.94s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.935075291s)

-- stdout --
	* [false-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-452000" primary control-plane node in "false-452000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-452000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:00.135922   10336 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:00.136059   10336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:00.136062   10336 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:00.136065   10336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:00.136184   10336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:00.137237   10336 out.go:298] Setting JSON to false
	I0731 12:39:00.153253   10336 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5903,"bootTime":1722448837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:39:00.153391   10336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:00.160356   10336 out.go:177] * [false-452000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:00.168303   10336 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:39:00.168341   10336 notify.go:220] Checking for updates...
	I0731 12:39:00.174319   10336 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:39:00.177296   10336 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:00.180309   10336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:00.183240   10336 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:39:00.186233   10336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:00.189729   10336 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:00.189800   10336 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:00.189852   10336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:00.194220   10336 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:00.201276   10336 start.go:297] selected driver: qemu2
	I0731 12:39:00.201281   10336 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:00.201287   10336 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:00.203541   10336 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:00.207331   10336 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:00.210354   10336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:00.210377   10336 cni.go:84] Creating CNI manager for "false"
	I0731 12:39:00.210414   10336 start.go:340] cluster config:
	{Name:false-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:00.214110   10336 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:00.221273   10336 out.go:177] * Starting "false-452000" primary control-plane node in "false-452000" cluster
	I0731 12:39:00.225303   10336 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:00.225326   10336 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:00.225337   10336 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:00.225394   10336 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:00.225400   10336 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:00.225469   10336 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/false-452000/config.json ...
	I0731 12:39:00.225480   10336 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/false-452000/config.json: {Name:mk59149f6870620f5f105059c506a0ce8c9c06c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:00.225712   10336 start.go:360] acquireMachinesLock for false-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:00.225750   10336 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "false-452000"
	I0731 12:39:00.225761   10336 start.go:93] Provisioning new machine with config: &{Name:false-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:00.225791   10336 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:00.234264   10336 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:00.252135   10336 start.go:159] libmachine.API.Create for "false-452000" (driver="qemu2")
	I0731 12:39:00.252165   10336 client.go:168] LocalClient.Create starting
	I0731 12:39:00.252239   10336 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:00.252268   10336 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:00.252278   10336 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:00.252319   10336 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:00.252343   10336 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:00.252352   10336 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:00.252695   10336 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:00.403918   10336 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:00.564479   10336 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:00.564488   10336 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:00.564697   10336 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2
	I0731 12:39:00.574614   10336 main.go:141] libmachine: STDOUT: 
	I0731 12:39:00.574639   10336 main.go:141] libmachine: STDERR: 
	I0731 12:39:00.574690   10336 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2 +20000M
	I0731 12:39:00.582517   10336 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:00.582530   10336 main.go:141] libmachine: STDERR: 
	I0731 12:39:00.582551   10336 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2
	I0731 12:39:00.582555   10336 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:00.582568   10336 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:00.582593   10336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:26:01:22:56:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2
	I0731 12:39:00.584194   10336 main.go:141] libmachine: STDOUT: 
	I0731 12:39:00.584209   10336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:00.584227   10336 client.go:171] duration metric: took 332.064334ms to LocalClient.Create
	I0731 12:39:02.586360   10336 start.go:128] duration metric: took 2.360599916s to createHost
	I0731 12:39:02.586414   10336 start.go:83] releasing machines lock for "false-452000", held for 2.3607055s
	W0731 12:39:02.586555   10336 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:02.596723   10336 out.go:177] * Deleting "false-452000" in qemu2 ...
	W0731 12:39:02.634658   10336 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:02.634689   10336 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:07.636742   10336 start.go:360] acquireMachinesLock for false-452000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:07.637269   10336 start.go:364] duration metric: took 435.625µs to acquireMachinesLock for "false-452000"
	I0731 12:39:07.637387   10336 start.go:93] Provisioning new machine with config: &{Name:false-452000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:07.637731   10336 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:07.654481   10336 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:39:07.705089   10336 start.go:159] libmachine.API.Create for "false-452000" (driver="qemu2")
	I0731 12:39:07.705138   10336 client.go:168] LocalClient.Create starting
	I0731 12:39:07.705254   10336 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:07.705313   10336 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:07.705338   10336 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:07.705410   10336 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:07.705452   10336 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:07.705463   10336 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:07.705957   10336 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:07.867895   10336 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:07.975106   10336 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:07.975115   10336 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:07.975279   10336 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2
	I0731 12:39:07.984554   10336 main.go:141] libmachine: STDOUT: 
	I0731 12:39:07.984572   10336 main.go:141] libmachine: STDERR: 
	I0731 12:39:07.984618   10336 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2 +20000M
	I0731 12:39:07.992427   10336 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:07.992448   10336 main.go:141] libmachine: STDERR: 
	I0731 12:39:07.992459   10336 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2
	I0731 12:39:07.992464   10336 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:07.992481   10336 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:07.992547   10336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:5d:46:7c:73:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/false-452000/disk.qcow2
	I0731 12:39:07.994155   10336 main.go:141] libmachine: STDOUT: 
	I0731 12:39:07.994171   10336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:07.994184   10336 client.go:171] duration metric: took 289.046166ms to LocalClient.Create
	I0731 12:39:09.996318   10336 start.go:128] duration metric: took 2.358604417s to createHost
	I0731 12:39:09.996372   10336 start.go:83] releasing machines lock for "false-452000", held for 2.359129541s
	W0731 12:39:09.996756   10336 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-452000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:10.011404   10336 out.go:177] 
	W0731 12:39:10.015549   10336 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:10.015580   10336 out.go:239] * 
	* 
	W0731 12:39:10.018176   10336 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:10.028429   10336 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.94s)
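All of the socket_vmnet failures in this report, including the one above, share a single visible root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor and the driver exits with "Connection refused". A minimal sanity check on the build host might look like the following sketch (the paths match the /opt/socket_vmnet layout shown in the log; the restart invocation follows the socket_vmnet README, and the gateway address is an assumption, not taken from this report):

	$ ls -l /var/run/socket_vmnet   # the Unix socket should exist
	$ pgrep -fl socket_vmnet        # the daemon should be running
	# If the daemon is down, restart it; vmnet access requires root.
	# The --vmnet-gateway value below is an assumed example:
	$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &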

TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.769546125s)

-- stdout --
	* [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:12.227749   10447 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:12.227879   10447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:12.227886   10447 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:12.227889   10447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:12.228014   10447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:12.229116   10447 out.go:298] Setting JSON to false
	I0731 12:39:12.244930   10447 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5915,"bootTime":1722448837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:39:12.244995   10447 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:12.251686   10447 out.go:177] * [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:12.259662   10447 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:39:12.259717   10447 notify.go:220] Checking for updates...
	I0731 12:39:12.265731   10447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:39:12.268653   10447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:12.271709   10447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:12.274662   10447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:39:12.277688   10447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:12.280968   10447 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:12.281033   10447 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:12.281089   10447 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:12.285545   10447 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:12.292692   10447 start.go:297] selected driver: qemu2
	I0731 12:39:12.292700   10447 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:12.292707   10447 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:12.295010   10447 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:12.297687   10447 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:12.300736   10447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:12.300766   10447 cni.go:84] Creating CNI manager for ""
	I0731 12:39:12.300772   10447 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:39:12.300801   10447 start.go:340] cluster config:
	{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:12.304414   10447 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:12.311702   10447 out.go:177] * Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	I0731 12:39:12.315719   10447 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:39:12.315736   10447 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:39:12.315751   10447 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:12.315826   10447 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:12.315832   10447 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:39:12.315902   10447 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/old-k8s-version-629000/config.json ...
	I0731 12:39:12.315913   10447 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/old-k8s-version-629000/config.json: {Name:mk0816d8e56db0726bf5bed976f2a8205bf75747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:12.316263   10447 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:12.316300   10447 start.go:364] duration metric: took 29.541µs to acquireMachinesLock for "old-k8s-version-629000"
	I0731 12:39:12.316312   10447 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:12.316346   10447 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:12.324686   10447 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:39:12.343001   10447 start.go:159] libmachine.API.Create for "old-k8s-version-629000" (driver="qemu2")
	I0731 12:39:12.343031   10447 client.go:168] LocalClient.Create starting
	I0731 12:39:12.343092   10447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:12.343123   10447 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:12.343133   10447 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:12.343173   10447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:12.343197   10447 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:12.343204   10447 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:12.343614   10447 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:12.496526   10447 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:12.539546   10447 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:12.539551   10447 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:12.539727   10447 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:12.549008   10447 main.go:141] libmachine: STDOUT: 
	I0731 12:39:12.549022   10447 main.go:141] libmachine: STDERR: 
	I0731 12:39:12.549080   10447 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2 +20000M
	I0731 12:39:12.556924   10447 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:12.556940   10447 main.go:141] libmachine: STDERR: 
	I0731 12:39:12.556954   10447 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:12.556957   10447 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:12.556970   10447 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:12.556993   10447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a5:da:e9:ae:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:12.558612   10447 main.go:141] libmachine: STDOUT: 
	I0731 12:39:12.558623   10447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:12.558640   10447 client.go:171] duration metric: took 215.608375ms to LocalClient.Create
	I0731 12:39:14.560774   10447 start.go:128] duration metric: took 2.244454875s to createHost
	I0731 12:39:14.560833   10447 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 2.244572625s
	W0731 12:39:14.560946   10447 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:14.572251   10447 out.go:177] * Deleting "old-k8s-version-629000" in qemu2 ...
	W0731 12:39:14.602120   10447 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:14.602144   10447 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:19.604297   10447 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:19.604747   10447 start.go:364] duration metric: took 354.125µs to acquireMachinesLock for "old-k8s-version-629000"
	I0731 12:39:19.604874   10447 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:19.605184   10447 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:19.616755   10447 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:39:19.667077   10447 start.go:159] libmachine.API.Create for "old-k8s-version-629000" (driver="qemu2")
	I0731 12:39:19.667137   10447 client.go:168] LocalClient.Create starting
	I0731 12:39:19.667240   10447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:19.667297   10447 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:19.667312   10447 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:19.667367   10447 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:19.667421   10447 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:19.667432   10447 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:19.668082   10447 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:19.831015   10447 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:19.900689   10447 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:19.900697   10447 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:19.900884   10447 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:19.910314   10447 main.go:141] libmachine: STDOUT: 
	I0731 12:39:19.910335   10447 main.go:141] libmachine: STDERR: 
	I0731 12:39:19.910387   10447 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2 +20000M
	I0731 12:39:19.918197   10447 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:19.918210   10447 main.go:141] libmachine: STDERR: 
	I0731 12:39:19.918236   10447 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:19.918242   10447 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:19.918254   10447 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:19.918278   10447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:93:49:d3:48:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:19.919941   10447 main.go:141] libmachine: STDOUT: 
	I0731 12:39:19.919955   10447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:19.919967   10447 client.go:171] duration metric: took 252.831166ms to LocalClient.Create
	I0731 12:39:21.922108   10447 start.go:128] duration metric: took 2.316934959s to createHost
	I0731 12:39:21.922155   10447 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 2.317432958s
	W0731 12:39:21.922479   10447 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:21.931952   10447 out.go:177] 
	W0731 12:39:21.942216   10447 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:21.942255   10447 out.go:239] * 
	* 
	W0731 12:39:21.944695   10447 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:21.955121   10447 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (66.74ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.84s)
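The failing step can likely be reproduced without minikube at all: socket_vmnet_client first connects to the Unix socket and only then launches the wrapped command with the connection passed in as a file descriptor (fd=3 in the qemu-system-aarch64 invocations above). Wrapping a trivial command should therefore exercise just the connection step; with the daemon down it would fail the same way (a sketch, assuming the paths shown in the log):

	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	Failed to connect to "/var/run/socket_vmnet": Connection refused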

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-629000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-629000 create -f testdata/busybox.yaml: exit status 1 (29.983709ms)

** stderr ** 
	error: context "old-k8s-version-629000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-629000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.31075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (29.448875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
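This failure, and every remaining step in this serial group, is a cascade from the failed FirstStart: the VM never came up, so minikube never wrote an "old-k8s-version-629000" context into the kubeconfig, and each kubectl --context invocation dies before reaching a cluster. Stock kubectl commands confirm this directly:

	$ kubectl config get-contexts     # the profile's context will be absent
	$ kubectl config current-context  # shows which context, if any, is active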

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-629000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-629000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-629000 describe deploy/metrics-server -n kube-system: exit status 1 (26.327417ms)

** stderr ** 
	error: context "old-k8s-version-629000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-629000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.249ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
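The assertion here greps the describe output for the expected image string. On a healthy cluster the same value can be read straight from the deployment spec with a jsonpath query; a sketch (the deployment name and namespace are taken from the test output above, and the expected value from the assertion message):

	$ kubectl --context old-k8s-version-629000 -n kube-system \
	    get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4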

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.193692291s)

-- stdout --
	* [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:26.362800   10497 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:26.362917   10497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:26.362921   10497 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:26.362923   10497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:26.363049   10497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:26.364062   10497 out.go:298] Setting JSON to false
	I0731 12:39:26.379979   10497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5929,"bootTime":1722448837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:39:26.380052   10497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:26.384893   10497 out.go:177] * [old-k8s-version-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:26.392798   10497 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:39:26.392850   10497 notify.go:220] Checking for updates...
	I0731 12:39:26.400844   10497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:39:26.403911   10497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:26.406889   10497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:26.409928   10497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:39:26.411319   10497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:26.415102   10497 config.go:182] Loaded profile config "old-k8s-version-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:39:26.417825   10497 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:39:26.423125   10497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:26.427930   10497 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:39:26.434901   10497 start.go:297] selected driver: qemu2
	I0731 12:39:26.434908   10497 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:26.434979   10497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:26.437371   10497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:26.437395   10497 cni.go:84] Creating CNI manager for ""
	I0731 12:39:26.437401   10497 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:39:26.437426   10497 start.go:340] cluster config:
	{Name:old-k8s-version-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:26.441051   10497 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:26.448912   10497 out.go:177] * Starting "old-k8s-version-629000" primary control-plane node in "old-k8s-version-629000" cluster
	I0731 12:39:26.452879   10497 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:39:26.452895   10497 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:39:26.452909   10497 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:26.452984   10497 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:26.452990   10497 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:39:26.453065   10497 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/old-k8s-version-629000/config.json ...
	I0731 12:39:26.453559   10497 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:26.453591   10497 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "old-k8s-version-629000"
	I0731 12:39:26.453599   10497 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:39:26.453606   10497 fix.go:54] fixHost starting: 
	I0731 12:39:26.453719   10497 fix.go:112] recreateIfNeeded on old-k8s-version-629000: state=Stopped err=<nil>
	W0731 12:39:26.453728   10497 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:39:26.457901   10497 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	I0731 12:39:26.465838   10497 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:26.465875   10497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:93:49:d3:48:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:26.467926   10497 main.go:141] libmachine: STDOUT: 
	I0731 12:39:26.467944   10497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:26.467980   10497 fix.go:56] duration metric: took 14.377333ms for fixHost
	I0731 12:39:26.467985   10497 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 14.389208ms
	W0731 12:39:26.467993   10497 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:26.468034   10497 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:26.468039   10497 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:31.470129   10497 start.go:360] acquireMachinesLock for old-k8s-version-629000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:31.470578   10497 start.go:364] duration metric: took 284.083µs to acquireMachinesLock for "old-k8s-version-629000"
	I0731 12:39:31.470689   10497 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:39:31.470710   10497 fix.go:54] fixHost starting: 
	I0731 12:39:31.471390   10497 fix.go:112] recreateIfNeeded on old-k8s-version-629000: state=Stopped err=<nil>
	W0731 12:39:31.471417   10497 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:39:31.479901   10497 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-629000" ...
	I0731 12:39:31.483961   10497 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:31.484234   10497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:93:49:d3:48:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/old-k8s-version-629000/disk.qcow2
	I0731 12:39:31.493228   10497 main.go:141] libmachine: STDOUT: 
	I0731 12:39:31.493312   10497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:31.493412   10497 fix.go:56] duration metric: took 22.706083ms for fixHost
	I0731 12:39:31.493429   10497 start.go:83] releasing machines lock for "old-k8s-version-629000", held for 22.826834ms
	W0731 12:39:31.493611   10497 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:31.501885   10497 out.go:177] 
	W0731 12:39:31.504888   10497 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:31.504917   10497 out.go:239] * 
	* 
	W0731 12:39:31.507427   10497 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:31.515901   10497 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-629000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (68.0915ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
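Unlike FirstStart, this run takes the fixHost path ("Skipping create...Using existing machine configuration") instead of recreating the VM, but it re-executes the same socket_vmnet_client wrapper and so fails identically until the daemon is restored. Recovery on this host would presumably be the daemon restart sketched earlier, followed by either re-running the start or deleting the profile first, as the error text suggests (a sketch; flags trimmed from the full command line above):

	$ out/minikube-darwin-arm64 delete -p old-k8s-version-629000
	$ out/minikube-darwin-arm64 start -p old-k8s-version-629000 --driver=qemu2 --kubernetes-version=v1.20.0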

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-629000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (32.811625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
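The failures from this point on no longer involve the VM at all: the start above never created the cluster, so the kubeconfig has no "old-k8s-version-629000" context and every client-config lookup fails immediately. A sketch of how such a lookup produces this error with k8s.io/client-go (hypothetical; the suite's own helper may be structured differently):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-629000"}
		cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
		if _, err := cfg.ClientConfig(); err != nil {
			// With no such context in the kubeconfig this reports:
			//   context "old-k8s-version-629000" does not exist
			fmt.Println(err)
		}
	}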

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-629000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-629000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-629000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.649709ms)

** stderr ** 
	error: context "old-k8s-version-629000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-629000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (29.814667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-629000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
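The "(-want +got)" block above follows the github.com/google/go-cmp diff convention: every expected image is prefixed with "-" because the host never started and the image list came back empty. A hedged reconstruction of how such a comparison is produced (the exact helper in start_stop_delete_test.go may differ):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"k8s.gcr.io/pause:3.2" /* ... */}
		got := []string{} // "image list" returned nothing: the host is Stopped
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}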
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (29.525167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-629000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-629000 --alsologtostderr -v=1: exit status 83 (40.825042ms)

-- stdout --
	* The control-plane node old-k8s-version-629000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-629000"

-- /stdout --
** stderr ** 
	I0731 12:39:31.787261   10516 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:31.787657   10516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:31.787661   10516 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:31.787663   10516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:31.787814   10516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:31.788019   10516 out.go:298] Setting JSON to false
	I0731 12:39:31.788025   10516 mustload.go:65] Loading cluster: old-k8s-version-629000
	I0731 12:39:31.788211   10516 config.go:182] Loaded profile config "old-k8s-version-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:39:31.791795   10516 out.go:177] * The control-plane node old-k8s-version-629000 host is not running: state=Stopped
	I0731 12:39:31.795767   10516 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-629000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-629000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (30.297542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (29.422291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
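Note the distinct exit codes above: status 80 marks the failed starts, 83 the refusal to pause a host that is Stopped, and 7 the status probes. The "(dbg) Run" lines wrap the minikube binary with os/exec and record the numeric code; a minimal sketch of that capture pattern (illustrative, not the suite's actual runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-629000")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", ee.ExitCode()) // 83 in the run above
		}
	}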

TestStartStop/group/no-preload/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.968804542s)

-- stdout --
	* [no-preload-592000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-592000" primary control-plane node in "no-preload-592000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-592000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:32.107370   10533 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:32.107485   10533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:32.107488   10533 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:32.107490   10533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:32.107634   10533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:32.108688   10533 out.go:298] Setting JSON to false
	I0731 12:39:32.124854   10533 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5935,"bootTime":1722448837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:39:32.124917   10533 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:32.129803   10533 out.go:177] * [no-preload-592000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:32.135834   10533 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:39:32.135927   10533 notify.go:220] Checking for updates...
	I0731 12:39:32.142796   10533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:39:32.145748   10533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:32.148718   10533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:32.151798   10533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:39:32.154797   10533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:32.158020   10533 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:32.158081   10533 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:32.158134   10533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:32.162768   10533 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:32.168744   10533 start.go:297] selected driver: qemu2
	I0731 12:39:32.168749   10533 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:32.168766   10533 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:32.171164   10533 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:32.173719   10533 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:32.176846   10533 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:32.176886   10533 cni.go:84] Creating CNI manager for ""
	I0731 12:39:32.176894   10533 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:39:32.176899   10533 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:39:32.176922   10533 start.go:340] cluster config:
	{Name:no-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:32.180748   10533 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.187889   10533 out.go:177] * Starting "no-preload-592000" primary control-plane node in "no-preload-592000" cluster
	I0731 12:39:32.191779   10533 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:39:32.191884   10533 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/no-preload-592000/config.json ...
	I0731 12:39:32.191907   10533 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/no-preload-592000/config.json: {Name:mk8b98b885417e3757fa903a24b3578fa70885e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:32.191907   10533 cache.go:107] acquiring lock: {Name:mk9272971e705a314fde8caf791b00ae5a154e1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.191911   10533 cache.go:107] acquiring lock: {Name:mkf771878d659eb5e064bcb0de049baf36f76397 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.191910   10533 cache.go:107] acquiring lock: {Name:mk081d2b1ffb72d5f869cb373ce00ca3de02a3fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.191967   10533 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:39:32.191974   10533 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 68.084µs
	I0731 12:39:32.191984   10533 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:39:32.191996   10533 cache.go:107] acquiring lock: {Name:mkba2caf5bdb670ccb7acffd9ee75001d6c3ef7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.192076   10533 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 12:39:32.192090   10533 cache.go:107] acquiring lock: {Name:mkf6f5ca97a66123ce5cc70d761e717cf501951d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.192125   10533 cache.go:107] acquiring lock: {Name:mk52a9ded86938ba8826f1dd57a1b4754b12ba24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.192077   10533 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 12:39:32.192162   10533 cache.go:107] acquiring lock: {Name:mkeafa38309e571c49c78768f6282eb1153f40b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.192237   10533 start.go:360] acquireMachinesLock for no-preload-592000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:32.192240   10533 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 12:39:32.192265   10533 cache.go:107] acquiring lock: {Name:mk2a10cbd9b2c148721d1e8923dcb9bb26170be7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:32.192274   10533 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 12:39:32.192323   10533 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 12:39:32.192274   10533 start.go:364] duration metric: took 30.958µs to acquireMachinesLock for "no-preload-592000"
	I0731 12:39:32.192378   10533 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 12:39:32.192347   10533 start.go:93] Provisioning new machine with config: &{Name:no-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:32.192405   10533 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:32.192481   10533 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 12:39:32.196839   10533 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:39:32.204552   10533 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 12:39:32.204657   10533 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 12:39:32.205228   10533 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 12:39:32.205420   10533 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 12:39:32.205446   10533 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 12:39:32.205528   10533 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 12:39:32.205605   10533 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 12:39:32.214598   10533 start.go:159] libmachine.API.Create for "no-preload-592000" (driver="qemu2")
	I0731 12:39:32.214615   10533 client.go:168] LocalClient.Create starting
	I0731 12:39:32.214672   10533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:32.214714   10533 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:32.214730   10533 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:32.214769   10533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:32.214791   10533 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:32.214797   10533 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:32.215101   10533 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:32.373105   10533 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:32.597900   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 12:39:32.601526   10533 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:32.601536   10533 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:32.601704   10533 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:32.602515   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 12:39:32.603848   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 12:39:32.611363   10533 main.go:141] libmachine: STDOUT: 
	I0731 12:39:32.611370   10533 main.go:141] libmachine: STDERR: 
	I0731 12:39:32.611419   10533 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2 +20000M
	I0731 12:39:32.619456   10533 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:32.619469   10533 main.go:141] libmachine: STDERR: 
	I0731 12:39:32.619482   10533 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:32.619485   10533 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:32.619497   10533 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:32.619520   10533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:11:22:20:75:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:32.621386   10533 main.go:141] libmachine: STDOUT: 
	I0731 12:39:32.621419   10533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:32.621441   10533 client.go:171] duration metric: took 406.829208ms to LocalClient.Create
	I0731 12:39:32.654212   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 12:39:32.670861   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 12:39:32.704961   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0731 12:39:32.705675   10533 cache.go:162] opening:  /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0731 12:39:32.868108   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 12:39:32.868159   10533 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 676.171708ms
	I0731 12:39:32.868220   10533 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 12:39:34.621729   10533 start.go:128] duration metric: took 2.42934875s to createHost
	I0731 12:39:34.621808   10533 start.go:83] releasing machines lock for "no-preload-592000", held for 2.429516791s
	W0731 12:39:34.621858   10533 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:34.645497   10533 out.go:177] * Deleting "no-preload-592000" in qemu2 ...
	W0731 12:39:34.675627   10533 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:34.675648   10533 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:35.753748   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 12:39:35.753776   10533 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.561681167s
	I0731 12:39:35.753801   10533 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 12:39:35.903398   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 12:39:35.903458   10533 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.711487083s
	I0731 12:39:35.903490   10533 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 12:39:36.173835   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 12:39:36.173889   10533 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 3.981875375s
	I0731 12:39:36.173916   10533 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 12:39:36.436373   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 12:39:36.436416   10533 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.244605042s
	I0731 12:39:36.436439   10533 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 12:39:37.064679   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 12:39:37.064726   10533 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.87292575s
	I0731 12:39:37.064757   10533 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 12:39:39.342994   10533 cache.go:157] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 12:39:39.343045   10533 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.15110525s
	I0731 12:39:39.343110   10533 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 12:39:39.343141   10533 cache.go:87] Successfully saved all images to host disk.
	I0731 12:39:39.677709   10533 start.go:360] acquireMachinesLock for no-preload-592000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:39.678129   10533 start.go:364] duration metric: took 352µs to acquireMachinesLock for "no-preload-592000"
	I0731 12:39:39.678237   10533 start.go:93] Provisioning new machine with config: &{Name:no-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:39.678486   10533 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:39.686938   10533 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:39:39.737389   10533 start.go:159] libmachine.API.Create for "no-preload-592000" (driver="qemu2")
	I0731 12:39:39.737434   10533 client.go:168] LocalClient.Create starting
	I0731 12:39:39.737552   10533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:39.737611   10533 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:39.737640   10533 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:39.737699   10533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:39.737753   10533 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:39.737790   10533 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:39.738318   10533 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:39.911220   10533 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:39.982128   10533 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:39.982133   10533 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:39.982334   10533 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:39.991914   10533 main.go:141] libmachine: STDOUT: 
	I0731 12:39:39.991928   10533 main.go:141] libmachine: STDERR: 
	I0731 12:39:39.991976   10533 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2 +20000M
	I0731 12:39:39.999847   10533 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:39.999869   10533 main.go:141] libmachine: STDERR: 
	I0731 12:39:39.999880   10533 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:39.999884   10533 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:39.999890   10533 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:39.999931   10533 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:be:b9:39:63:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:40.001632   10533 main.go:141] libmachine: STDOUT: 
	I0731 12:39:40.001646   10533 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:40.001661   10533 client.go:171] duration metric: took 264.226959ms to LocalClient.Create
	I0731 12:39:42.002795   10533 start.go:128] duration metric: took 2.324328959s to createHost
	I0731 12:39:42.002896   10533 start.go:83] releasing machines lock for "no-preload-592000", held for 2.3247895s
	W0731 12:39:42.003266   10533 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-592000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-592000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:42.012848   10533 out.go:177] 
	W0731 12:39:42.021861   10533 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:42.021892   10533 out.go:239] * 
	* 
	W0731 12:39:42.024636   10533 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:42.032788   10533 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (65.693541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.04s)
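The command line captured in the stderr above shows how the qemu2 driver wires networking: socket_vmnet_client connects to /var/run/socket_vmnet and then launches qemu-system-aarch64 with the already-connected socket as descriptor 3 (-netdev socket,id=net0,fd=3), which is why a dead daemon kills the VM before it ever boots. A simplified Go sketch of that fd-passing pattern (an illustration only, not socket_vmnet_client itself; a real invocation needs the full flag set shown above):

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// The step that fails throughout this report: reaching the daemon.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // "connection refused" when socket_vmnet is not running
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching -netdev socket,fd=3.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}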

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-592000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-592000 create -f testdata/busybox.yaml: exit status 1 (29.922041ms)

** stderr ** 
	error: context "no-preload-592000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-592000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (30.996209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (30.0755ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-592000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-592000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-592000 describe deploy/metrics-server -n kube-system: exit status 1 (27.368958ms)

** stderr ** 
	error: context "no-preload-592000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-592000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (30.195333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.182010667s)

-- stdout --
	* [no-preload-592000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-592000" primary control-plane node in "no-preload-592000" cluster
	* Restarting existing qemu2 VM for "no-preload-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-592000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:45.650363   10612 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:45.650533   10612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:45.650536   10612 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:45.650539   10612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:45.650668   10612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:45.651892   10612 out.go:298] Setting JSON to false
	I0731 12:39:45.668402   10612 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5948,"bootTime":1722448837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:39:45.668473   10612 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:45.673362   10612 out.go:177] * [no-preload-592000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:45.681281   10612 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:39:45.681343   10612 notify.go:220] Checking for updates...
	I0731 12:39:45.685868   10612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:39:45.689347   10612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:45.692327   10612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:45.695376   10612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:39:45.698359   10612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:45.701693   10612 config.go:182] Loaded profile config "no-preload-592000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:39:45.701967   10612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:45.706322   10612 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:39:45.713309   10612 start.go:297] selected driver: qemu2
	I0731 12:39:45.713314   10612 start.go:901] validating driver "qemu2" against &{Name:no-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:45.713376   10612 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:45.715706   10612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:45.715750   10612 cni.go:84] Creating CNI manager for ""
	I0731 12:39:45.715762   10612 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:39:45.715790   10612 start.go:340] cluster config:
	{Name:no-preload-592000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-592000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:45.719373   10612 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.727307   10612 out.go:177] * Starting "no-preload-592000" primary control-plane node in "no-preload-592000" cluster
	I0731 12:39:45.731356   10612 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:39:45.731442   10612 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/no-preload-592000/config.json ...
	I0731 12:39:45.731455   10612 cache.go:107] acquiring lock: {Name:mk9272971e705a314fde8caf791b00ae5a154e1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731473   10612 cache.go:107] acquiring lock: {Name:mkba2caf5bdb670ccb7acffd9ee75001d6c3ef7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731489   10612 cache.go:107] acquiring lock: {Name:mkeafa38309e571c49c78768f6282eb1153f40b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731510   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:39:45.731517   10612 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 71.25µs
	I0731 12:39:45.731524   10612 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:39:45.731525   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 12:39:45.731531   10612 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 57.5µs
	I0731 12:39:45.731535   10612 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 12:39:45.731533   10612 cache.go:107] acquiring lock: {Name:mkf6f5ca97a66123ce5cc70d761e717cf501951d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731546   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 12:39:45.731549   10612 cache.go:107] acquiring lock: {Name:mk52a9ded86938ba8826f1dd57a1b4754b12ba24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731552   10612 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 90.667µs
	I0731 12:39:45.731561   10612 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 12:39:45.731568   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 12:39:45.731597   10612 cache.go:107] acquiring lock: {Name:mk081d2b1ffb72d5f869cb373ce00ca3de02a3fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731597   10612 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 64.333µs
	I0731 12:39:45.731603   10612 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 12:39:45.731599   10612 cache.go:107] acquiring lock: {Name:mk2a10cbd9b2c148721d1e8923dcb9bb26170be7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731457   10612 cache.go:107] acquiring lock: {Name:mkf771878d659eb5e064bcb0de049baf36f76397 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:45.731585   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 12:39:45.731637   10612 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 77.667µs
	I0731 12:39:45.731651   10612 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 12:39:45.731665   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 12:39:45.731670   10612 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 88.709µs
	I0731 12:39:45.731677   10612 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 12:39:45.731687   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 12:39:45.731692   10612 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 99.625µs
	I0731 12:39:45.731695   10612 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 12:39:45.731759   10612 cache.go:115] /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 12:39:45.731764   10612 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 316.833µs
	I0731 12:39:45.731769   10612 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 12:39:45.731773   10612 cache.go:87] Successfully saved all images to host disk.
	I0731 12:39:45.731860   10612 start.go:360] acquireMachinesLock for no-preload-592000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:45.731887   10612 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "no-preload-592000"
	I0731 12:39:45.731895   10612 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:39:45.731899   10612 fix.go:54] fixHost starting: 
	I0731 12:39:45.732011   10612 fix.go:112] recreateIfNeeded on no-preload-592000: state=Stopped err=<nil>
	W0731 12:39:45.732020   10612 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:39:45.739330   10612 out.go:177] * Restarting existing qemu2 VM for "no-preload-592000" ...
	I0731 12:39:45.743349   10612 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:45.743392   10612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:be:b9:39:63:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:45.745342   10612 main.go:141] libmachine: STDOUT: 
	I0731 12:39:45.745361   10612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:45.745390   10612 fix.go:56] duration metric: took 13.490667ms for fixHost
	I0731 12:39:45.745394   10612 start.go:83] releasing machines lock for "no-preload-592000", held for 13.503458ms
	W0731 12:39:45.745401   10612 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:45.745431   10612 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:45.745436   10612 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:50.747571   10612 start.go:360] acquireMachinesLock for no-preload-592000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:50.747918   10612 start.go:364] duration metric: took 283.709µs to acquireMachinesLock for "no-preload-592000"
	I0731 12:39:50.748045   10612 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:39:50.748063   10612 fix.go:54] fixHost starting: 
	I0731 12:39:50.748728   10612 fix.go:112] recreateIfNeeded on no-preload-592000: state=Stopped err=<nil>
	W0731 12:39:50.748753   10612 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:39:50.757116   10612 out.go:177] * Restarting existing qemu2 VM for "no-preload-592000" ...
	I0731 12:39:50.760017   10612 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:50.760250   10612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:be:b9:39:63:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/no-preload-592000/disk.qcow2
	I0731 12:39:50.768875   10612 main.go:141] libmachine: STDOUT: 
	I0731 12:39:50.768944   10612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:50.769067   10612 fix.go:56] duration metric: took 21.004584ms for fixHost
	I0731 12:39:50.769108   10612 start.go:83] releasing machines lock for "no-preload-592000", held for 21.141666ms
	W0731 12:39:50.769283   10612 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-592000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-592000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:50.778096   10612 out.go:177] 
	W0731 12:39:50.781174   10612 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:39:50.781201   10612 out.go:239] * 
	* 
	W0731 12:39:50.783634   10612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:39:50.791176   10612 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-592000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (67.312416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
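Every failure in this serial group traces back to the same line in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client and gets Connection refused on /var/run/socket_vmnet, which means the socket_vmnet daemon is not listening on the build agent. A minimal triage sketch to run on the host follows; only the socket path is taken from the log, and the Homebrew service name is an assumption:

	# Does the daemon socket exist? (path copied from the qemu invocation above)
	ls -l /var/run/socket_vmnet
	# If it is missing, restart the daemon; assumes socket_vmnet was installed via Homebrew
	sudo brew services restart socket_vmnet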

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-592000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (32.225458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-592000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-592000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-592000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.747667ms)

** stderr ** 
	error: context "no-preload-592000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-592000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (30.075875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
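The UserAppExistsAfterStop and AddonExistsAfterStop failures above are purely downstream of the failed SecondStart: because the VM never booted, no kubeconfig context named no-preload-592000 was written, so every kubectl --context invocation exits 1 before ever reaching a cluster. A quick plain-kubectl check (standard kubectl, nothing minikube-specific) for which contexts actually exist:

	kubectl config get-contexts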

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-592000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (30.169958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
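VerifyKubernetesImages diffs the images reported by the profile against the expected set for v1.31.0-beta.0; with the host stopped, the listing comes back empty and the entire want-list is reported as missing, even though the cache.go lines earlier in this report show all eight images saved to the host cache. The check the test performs can be repeated by hand once a profile is actually up, using the same command it runs:

	out/minikube-darwin-arm64 -p no-preload-592000 image list --format=json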

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-592000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-592000 --alsologtostderr -v=1: exit status 83 (40.417459ms)

-- stdout --
	* The control-plane node no-preload-592000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-592000"

-- /stdout --
** stderr ** 
	I0731 12:39:51.060310   10631 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:51.060457   10631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:51.060461   10631 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:51.060463   10631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:51.060581   10631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:51.060809   10631 out.go:298] Setting JSON to false
	I0731 12:39:51.060817   10631 mustload.go:65] Loading cluster: no-preload-592000
	I0731 12:39:51.061000   10631 config.go:182] Loaded profile config "no-preload-592000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:39:51.064655   10631 out.go:177] * The control-plane node no-preload-592000 host is not running: state=Stopped
	I0731 12:39:51.068444   10631 out.go:177]   To start a cluster, run: "minikube start -p no-preload-592000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-592000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (29.243041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (29.43325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-592000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
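Exit status 83 here appears to be minikube's advice path rather than a pause error: pause first loads the cluster (mustload.go:65 in the stderr above) and bails out as soon as it sees state=Stopped. Checking the host state before pausing makes the outcome unambiguous; both commands appear verbatim in this log:

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000
	out/minikube-darwin-arm64 pause -p no-preload-592000 --alsologtostderr -v=1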

TestStartStop/group/embed-certs/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-401000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-401000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.916014542s)

-- stdout --
	* [embed-certs-401000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-401000" primary control-plane node in "embed-certs-401000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-401000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:39:51.376822   10648 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:39:51.376971   10648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:51.376975   10648 out.go:304] Setting ErrFile to fd 2...
	I0731 12:39:51.376978   10648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:39:51.377090   10648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:39:51.378111   10648 out.go:298] Setting JSON to false
	I0731 12:39:51.394283   10648 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5954,"bootTime":1722448837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:39:51.394353   10648 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:39:51.398571   10648 out.go:177] * [embed-certs-401000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:39:51.404528   10648 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:39:51.404659   10648 notify.go:220] Checking for updates...
	I0731 12:39:51.411505   10648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:39:51.414556   10648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:39:51.417514   10648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:39:51.420539   10648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:39:51.423488   10648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:39:51.426869   10648 config.go:182] Loaded profile config "cert-expiration-657000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:51.426932   10648 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:39:51.426977   10648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:39:51.431505   10648 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:39:51.438520   10648 start.go:297] selected driver: qemu2
	I0731 12:39:51.438539   10648 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:39:51.438549   10648 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:39:51.440927   10648 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:39:51.443557   10648 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:39:51.445095   10648 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:39:51.445116   10648 cni.go:84] Creating CNI manager for ""
	I0731 12:39:51.445132   10648 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:39:51.445135   10648 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:39:51.445168   10648 start.go:340] cluster config:
	{Name:embed-certs-401000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:39:51.448931   10648 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:39:51.456547   10648 out.go:177] * Starting "embed-certs-401000" primary control-plane node in "embed-certs-401000" cluster
	I0731 12:39:51.460476   10648 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:39:51.460494   10648 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:39:51.460505   10648 cache.go:56] Caching tarball of preloaded images
	I0731 12:39:51.460576   10648 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:39:51.460582   10648 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:39:51.460651   10648 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/embed-certs-401000/config.json ...
	I0731 12:39:51.460663   10648 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/embed-certs-401000/config.json: {Name:mk28ba04aa68ea4075331e2a9f8ae74b4fd7a3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:39:51.460887   10648 start.go:360] acquireMachinesLock for embed-certs-401000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:51.460922   10648 start.go:364] duration metric: took 29.166µs to acquireMachinesLock for "embed-certs-401000"
	I0731 12:39:51.460933   10648 start.go:93] Provisioning new machine with config: &{Name:embed-certs-401000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:51.460957   10648 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:51.469526   10648 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:39:51.486796   10648 start.go:159] libmachine.API.Create for "embed-certs-401000" (driver="qemu2")
	I0731 12:39:51.486828   10648 client.go:168] LocalClient.Create starting
	I0731 12:39:51.486889   10648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:51.486921   10648 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:51.486930   10648 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:51.486975   10648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:51.487008   10648 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:51.487020   10648 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:51.487398   10648 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:51.643828   10648 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:51.804015   10648 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:51.804021   10648 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:51.804204   10648 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:39:51.813841   10648 main.go:141] libmachine: STDOUT: 
	I0731 12:39:51.813860   10648 main.go:141] libmachine: STDERR: 
	I0731 12:39:51.813913   10648 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2 +20000M
	I0731 12:39:51.821810   10648 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:51.821824   10648 main.go:141] libmachine: STDERR: 
	I0731 12:39:51.821844   10648 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:39:51.821849   10648 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:51.821864   10648 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:51.821889   10648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:a0:21:e7:54:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:39:51.823473   10648 main.go:141] libmachine: STDOUT: 
	I0731 12:39:51.823492   10648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:51.823509   10648 client.go:171] duration metric: took 336.683459ms to LocalClient.Create
	I0731 12:39:53.825651   10648 start.go:128] duration metric: took 2.364719833s to createHost
	I0731 12:39:53.825717   10648 start.go:83] releasing machines lock for "embed-certs-401000", held for 2.364837375s
	W0731 12:39:53.825833   10648 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:53.838063   10648 out.go:177] * Deleting "embed-certs-401000" in qemu2 ...
	W0731 12:39:53.868985   10648 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:39:53.869010   10648 start.go:729] Will try again in 5 seconds ...
	I0731 12:39:58.871119   10648 start.go:360] acquireMachinesLock for embed-certs-401000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:39:58.871592   10648 start.go:364] duration metric: took 370.167µs to acquireMachinesLock for "embed-certs-401000"
	I0731 12:39:58.871726   10648 start.go:93] Provisioning new machine with config: &{Name:embed-certs-401000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:39:58.872007   10648 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:39:58.890697   10648 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:39:58.942387   10648 start.go:159] libmachine.API.Create for "embed-certs-401000" (driver="qemu2")
	I0731 12:39:58.942425   10648 client.go:168] LocalClient.Create starting
	I0731 12:39:58.942528   10648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:39:58.942587   10648 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:58.942612   10648 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:58.942675   10648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:39:58.942718   10648 main.go:141] libmachine: Decoding PEM data...
	I0731 12:39:58.942733   10648 main.go:141] libmachine: Parsing certificate...
	I0731 12:39:58.943245   10648 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:39:59.109751   10648 main.go:141] libmachine: Creating SSH key...
	I0731 12:39:59.198410   10648 main.go:141] libmachine: Creating Disk image...
	I0731 12:39:59.198417   10648 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:39:59.198589   10648 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:39:59.207650   10648 main.go:141] libmachine: STDOUT: 
	I0731 12:39:59.207674   10648 main.go:141] libmachine: STDERR: 
	I0731 12:39:59.207725   10648 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2 +20000M
	I0731 12:39:59.215504   10648 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:39:59.215518   10648 main.go:141] libmachine: STDERR: 
	I0731 12:39:59.215533   10648 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:39:59.215536   10648 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:39:59.215545   10648 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:39:59.215573   10648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:45:b0:54:cf:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:39:59.217147   10648 main.go:141] libmachine: STDOUT: 
	I0731 12:39:59.217164   10648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:39:59.217175   10648 client.go:171] duration metric: took 274.75175ms to LocalClient.Create
	I0731 12:40:01.219311   10648 start.go:128] duration metric: took 2.347324333s to createHost
	I0731 12:40:01.219363   10648 start.go:83] releasing machines lock for "embed-certs-401000", held for 2.347792167s
	W0731 12:40:01.219705   10648 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:01.230257   10648 out.go:177] 
	W0731 12:40:01.240300   10648 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:01.240339   10648 out.go:239] * 
	* 
	W0731 12:40:01.243193   10648 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:01.251196   10648 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-401000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (66.751291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.99s)
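Note how much further FirstStart gets than the no-preload restart: the boot2docker.iso copy, SSH key, and disk image all succeed (both qemu-img calls return an empty STDERR), and only the final socket_vmnet-backed launch fails. The two disk-image steps can be reproduced in isolation; the flags are exactly those in the log, with the long machine paths shortened here for readability:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M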

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-401000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-401000 create -f testdata/busybox.yaml: exit status 1 (29.834583ms)

** stderr ** 
	error: context "embed-certs-401000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-401000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (29.34625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (29.7045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
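
This DeployApp failure is a knock-on effect rather than a distinct bug: FirstStart never brought the cluster up, so no kubeconfig context was written, and every later kubectl --context embed-certs-401000 call fails with context ... does not exist. A quick confirmation using the kubeconfig path shown in the log (standard kubectl; nothing assumed beyond that path):

    KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig \
        kubectl config get-contexts
    # embed-certs-401000 should be missing from the NAME column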

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-401000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-401000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-401000 describe deploy/metrics-server -n kube-system: exit status 1 (26.770166ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-401000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-401000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (29.879917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
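
Note that the addons enable command itself exits cleanly here even though the VM is stopped: the addon flags are recorded in the profile config, and only the follow-up kubectl verification fails. A sketch of how to inspect the recorded setting, assuming jq is installed and that the JSON keys mirror the cluster config dumped elsewhere in this report:

    jq '.Addons' /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/embed-certs-401000/config.json
    # expected: { "dashboard": true, "metrics-server": true }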

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (7.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-401000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-401000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (7.376044s)

                                                
                                                
-- stdout --
	* [embed-certs-401000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-401000" primary control-plane node in "embed-certs-401000" cluster
	* Restarting existing qemu2 VM for "embed-certs-401000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-401000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:40:03.766288   10697 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:03.766397   10697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:03.766400   10697 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:03.766403   10697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:03.766522   10697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:03.767519   10697 out.go:298] Setting JSON to false
	I0731 12:40:03.783601   10697 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5966,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:40:03.783665   10697 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:03.788266   10697 out.go:177] * [embed-certs-401000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:03.795222   10697 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:40:03.795268   10697 notify.go:220] Checking for updates...
	I0731 12:40:03.802221   10697 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:40:03.805216   10697 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:03.808213   10697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:03.811160   10697 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:40:03.814205   10697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:03.817499   10697 config.go:182] Loaded profile config "embed-certs-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:03.817756   10697 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:03.822128   10697 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:40:03.829244   10697 start.go:297] selected driver: qemu2
	I0731 12:40:03.829251   10697 start.go:901] validating driver "qemu2" against &{Name:embed-certs-401000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:03.829322   10697 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:03.831649   10697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:03.831675   10697 cni.go:84] Creating CNI manager for ""
	I0731 12:40:03.831682   10697 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:03.831705   10697 start.go:340] cluster config:
	{Name:embed-certs-401000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:03.835293   10697 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:03.843161   10697 out.go:177] * Starting "embed-certs-401000" primary control-plane node in "embed-certs-401000" cluster
	I0731 12:40:03.846149   10697 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:40:03.846167   10697 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:40:03.846193   10697 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:03.846255   10697 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:03.846261   10697 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:40:03.846319   10697 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/embed-certs-401000/config.json ...
	I0731 12:40:03.846804   10697 start.go:360] acquireMachinesLock for embed-certs-401000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:03.846834   10697 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "embed-certs-401000"
	I0731 12:40:03.846843   10697 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:03.846849   10697 fix.go:54] fixHost starting: 
	I0731 12:40:03.846970   10697 fix.go:112] recreateIfNeeded on embed-certs-401000: state=Stopped err=<nil>
	W0731 12:40:03.846979   10697 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:03.855026   10697 out.go:177] * Restarting existing qemu2 VM for "embed-certs-401000" ...
	I0731 12:40:03.859130   10697 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:03.859167   10697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:45:b0:54:cf:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:40:03.861217   10697 main.go:141] libmachine: STDOUT: 
	I0731 12:40:03.861237   10697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:03.861266   10697 fix.go:56] duration metric: took 14.418ms for fixHost
	I0731 12:40:03.861270   10697 start.go:83] releasing machines lock for "embed-certs-401000", held for 14.431459ms
	W0731 12:40:03.861278   10697 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:03.861317   10697 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:03.861322   10697 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:08.863327   10697 start.go:360] acquireMachinesLock for embed-certs-401000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:11.044615   10697 start.go:364] duration metric: took 2.181261875s to acquireMachinesLock for "embed-certs-401000"
	I0731 12:40:11.044724   10697 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:11.044748   10697 fix.go:54] fixHost starting: 
	I0731 12:40:11.045615   10697 fix.go:112] recreateIfNeeded on embed-certs-401000: state=Stopped err=<nil>
	W0731 12:40:11.045646   10697 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:11.051280   10697 out.go:177] * Restarting existing qemu2 VM for "embed-certs-401000" ...
	I0731 12:40:11.068268   10697 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:11.068556   10697 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:45:b0:54:cf:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/embed-certs-401000/disk.qcow2
	I0731 12:40:11.078215   10697 main.go:141] libmachine: STDOUT: 
	I0731 12:40:11.078274   10697 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:11.078352   10697 fix.go:56] duration metric: took 33.608625ms for fixHost
	I0731 12:40:11.078369   10697 start.go:83] releasing machines lock for "embed-certs-401000", held for 33.699333ms
	W0731 12:40:11.078589   10697 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-401000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-401000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:11.087027   10697 out.go:177] 
	W0731 12:40:11.091296   10697 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:11.091323   10697 out.go:239] * 
	* 
	W0731 12:40:11.093447   10697 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:11.102172   10697 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-401000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (59.699125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.44s)
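
SecondStart exercises the restart path (fixHost/recreateIfNeeded) rather than a fresh create, but it dies at the identical point on both attempts, because the QEMU process is always wrapped in the socket_vmnet client. The invocation pattern, condensed from the libmachine lines above (trailing arguments elided):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
        qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 ...
    # The wrapper connects to the daemon socket first and hands the resulting
    # fd to QEMU (-netdev socket,id=net0,fd=3); if that connect fails, QEMU
    # never starts, hence "Connection refused" on every restart attempt.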

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-819000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-819000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.889260375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-819000" primary control-plane node in "default-k8s-diff-port-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:40:08.685292   10717 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:08.685413   10717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:08.685417   10717 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:08.685419   10717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:08.685559   10717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:08.686568   10717 out.go:298] Setting JSON to false
	I0731 12:40:08.702582   10717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5971,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:40:08.702755   10717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:08.706965   10717 out.go:177] * [default-k8s-diff-port-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:08.713826   10717 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:40:08.713908   10717 notify.go:220] Checking for updates...
	I0731 12:40:08.719836   10717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:40:08.722820   10717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:08.725914   10717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:08.728782   10717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:40:08.731870   10717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:08.735175   10717 config.go:182] Loaded profile config "embed-certs-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:08.735235   10717 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:08.735276   10717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:08.739806   10717 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:40:08.746839   10717 start.go:297] selected driver: qemu2
	I0731 12:40:08.746846   10717 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:40:08.746858   10717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:08.749084   10717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:40:08.751819   10717 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:40:08.754927   10717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:08.754943   10717 cni.go:84] Creating CNI manager for ""
	I0731 12:40:08.754951   10717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:08.754960   10717 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:40:08.754988   10717 start.go:340] cluster config:
	{Name:default-k8s-diff-port-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:08.758679   10717 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:08.765767   10717 out.go:177] * Starting "default-k8s-diff-port-819000" primary control-plane node in "default-k8s-diff-port-819000" cluster
	I0731 12:40:08.769858   10717 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:40:08.769876   10717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:40:08.769886   10717 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:08.769956   10717 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:08.769962   10717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:40:08.770033   10717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/default-k8s-diff-port-819000/config.json ...
	I0731 12:40:08.770043   10717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/default-k8s-diff-port-819000/config.json: {Name:mk0593fef622ef80ded8ae7707ca5c97cb9618fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:40:08.770417   10717 start.go:360] acquireMachinesLock for default-k8s-diff-port-819000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:08.770454   10717 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "default-k8s-diff-port-819000"
	I0731 12:40:08.770464   10717 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:08.770495   10717 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:08.779878   10717 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:08.797652   10717 start.go:159] libmachine.API.Create for "default-k8s-diff-port-819000" (driver="qemu2")
	I0731 12:40:08.797681   10717 client.go:168] LocalClient.Create starting
	I0731 12:40:08.797736   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:40:08.797779   10717 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:08.797790   10717 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:08.797839   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:40:08.797863   10717 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:08.797870   10717 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:08.798376   10717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:40:08.949339   10717 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:09.023077   10717 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:09.023082   10717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:09.023257   10717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:09.032584   10717 main.go:141] libmachine: STDOUT: 
	I0731 12:40:09.032600   10717 main.go:141] libmachine: STDERR: 
	I0731 12:40:09.032670   10717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2 +20000M
	I0731 12:40:09.040442   10717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:09.040455   10717 main.go:141] libmachine: STDERR: 
	I0731 12:40:09.040464   10717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:09.040473   10717 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:09.040489   10717 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:09.040517   10717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:6b:1b:fa:77:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:09.042103   10717 main.go:141] libmachine: STDOUT: 
	I0731 12:40:09.042117   10717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:09.042140   10717 client.go:171] duration metric: took 244.46075ms to LocalClient.Create
	I0731 12:40:11.044279   10717 start.go:128] duration metric: took 2.273809833s to createHost
	I0731 12:40:11.044394   10717 start.go:83] releasing machines lock for "default-k8s-diff-port-819000", held for 2.273936834s
	W0731 12:40:11.044464   10717 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:11.064205   10717 out.go:177] * Deleting "default-k8s-diff-port-819000" in qemu2 ...
	W0731 12:40:11.118152   10717 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:11.118190   10717 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:16.120290   10717 start.go:360] acquireMachinesLock for default-k8s-diff-port-819000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:16.120738   10717 start.go:364] duration metric: took 364.375µs to acquireMachinesLock for "default-k8s-diff-port-819000"
	I0731 12:40:16.120903   10717 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:16.121149   10717 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:16.138010   10717 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:16.189129   10717 start.go:159] libmachine.API.Create for "default-k8s-diff-port-819000" (driver="qemu2")
	I0731 12:40:16.189191   10717 client.go:168] LocalClient.Create starting
	I0731 12:40:16.189308   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:40:16.189365   10717 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:16.189383   10717 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:16.189449   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:40:16.189496   10717 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:16.189508   10717 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:16.190215   10717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:40:16.353849   10717 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:16.480636   10717 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:16.480641   10717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:16.480805   10717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:16.489915   10717 main.go:141] libmachine: STDOUT: 
	I0731 12:40:16.489932   10717 main.go:141] libmachine: STDERR: 
	I0731 12:40:16.489977   10717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2 +20000M
	I0731 12:40:16.497718   10717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:16.497730   10717 main.go:141] libmachine: STDERR: 
	I0731 12:40:16.497742   10717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:16.497746   10717 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:16.497760   10717 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:16.497795   10717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b9:a8:10:9f:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:16.499439   10717 main.go:141] libmachine: STDOUT: 
	I0731 12:40:16.499454   10717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:16.499467   10717 client.go:171] duration metric: took 310.274ms to LocalClient.Create
	I0731 12:40:18.501593   10717 start.go:128] duration metric: took 2.380463375s to createHost
	I0731 12:40:18.501655   10717 start.go:83] releasing machines lock for "default-k8s-diff-port-819000", held for 2.380942834s
	W0731 12:40:18.501983   10717 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:18.511519   10717 out.go:177] 
	W0731 12:40:18.519708   10717 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:18.519739   10717 out.go:239] * 
	* 
	W0731 12:40:18.522243   10717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:18.532668   10717 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-819000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (65.78725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)
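
This profile differs only in requesting --apiserver-port=8444; the failure is the same environmental socket_vmnet problem, not anything port-related. The requested port does land in the generated profile before VM creation fails, which can be verified with a sketch like the following (jq assumed installed; key names taken from the config dump above):

    jq '.Nodes[0].Port' /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/default-k8s-diff-port-819000/config.json
    # expected: 8444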

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-401000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (30.739166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-401000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-401000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-401000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.169208ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-401000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-401000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (28.301958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-401000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (29.136542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
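
The -want +got diff shows every expected image as missing because image list against a stopped host returns an empty set; the want list is simply the standard v1.30.3 control-plane images plus minikube's storage provisioner. For reference, roughly the same set can be reproduced without a cluster (kubeadm assumed available; exact tags may vary by kubeadm build):

    kubeadm config images list --kubernetes-version v1.30.3
    # registry.k8s.io/kube-apiserver:v1.30.3, kube-controller-manager,
    # kube-scheduler, kube-proxy, coredns, etcd, pause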

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-401000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-401000 --alsologtostderr -v=1: exit status 83 (46.564542ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-401000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-401000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:40:11.359902   10739 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:11.360038   10739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:11.360041   10739 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:11.360043   10739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:11.360150   10739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:11.360364   10739 out.go:298] Setting JSON to false
	I0731 12:40:11.360370   10739 mustload.go:65] Loading cluster: embed-certs-401000
	I0731 12:40:11.360558   10739 config.go:182] Loaded profile config "embed-certs-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:11.365239   10739 out.go:177] * The control-plane node embed-certs-401000 host is not running: state=Stopped
	I0731 12:40:11.374249   10739 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-401000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-401000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (28.831166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (28.375041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-401000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
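
`pause` exits with status 83 here because the control-plane host is stopped; the harness then re-runs the same status probe twice in its post-mortem. A sketch of that probe driven from Go (binary path and profile name taken from this log; not the harness's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "status",
    		"--format={{.Host}}", "-p", "embed-certs-401000").CombinedOutput()
    	fmt.Printf("%s", out) // prints "Stopped" for this profile
    	if err != nil {
    		// a stopped host yields a non-zero exit (exit status 7 above),
    		// which the harness treats as "may be ok"
    		fmt.Println("status exit:", err)
    	}
    }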

TestStartStop/group/newest-cni/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-207000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-207000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.896344208s)

-- stdout --
	* [newest-cni-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-207000" primary control-plane node in "newest-cni-207000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-207000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:11.676572   10756 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:11.676679   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:11.676683   10756 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:11.676685   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:11.676801   10756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:11.677864   10756 out.go:298] Setting JSON to false
	I0731 12:40:11.693880   10756 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5974,"bootTime":1722448837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:40:11.693939   10756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:11.699228   10756 out.go:177] * [newest-cni-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:11.706171   10756 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:40:11.706235   10756 notify.go:220] Checking for updates...
	I0731 12:40:11.713148   10756 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:40:11.716150   10756 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:11.719197   10756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:11.722168   10756 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:40:11.725191   10756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:11.728411   10756 config.go:182] Loaded profile config "default-k8s-diff-port-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:11.728483   10756 config.go:182] Loaded profile config "multinode-684000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:11.728547   10756 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:11.732059   10756 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:40:11.739118   10756 start.go:297] selected driver: qemu2
	I0731 12:40:11.739124   10756 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:40:11.739130   10756 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:11.741450   10756 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 12:40:11.741481   10756 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 12:40:11.750121   10756 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:40:11.753252   10756 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 12:40:11.753298   10756 cni.go:84] Creating CNI manager for ""
	I0731 12:40:11.753306   10756 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:11.753311   10756 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:40:11.753343   10756 start.go:340] cluster config:
	{Name:newest-cni-207000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:11.757018   10756 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:11.764138   10756 out.go:177] * Starting "newest-cni-207000" primary control-plane node in "newest-cni-207000" cluster
	I0731 12:40:11.767032   10756 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:40:11.767051   10756 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:40:11.767064   10756 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:11.767121   10756 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:11.767127   10756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:40:11.767191   10756 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/newest-cni-207000/config.json ...
	I0731 12:40:11.767202   10756 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/newest-cni-207000/config.json: {Name:mk88689c873972e061204e988d49d64c9a74a28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:40:11.767559   10756 start.go:360] acquireMachinesLock for newest-cni-207000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:11.767595   10756 start.go:364] duration metric: took 30.292µs to acquireMachinesLock for "newest-cni-207000"
	I0731 12:40:11.767607   10756 start.go:93] Provisioning new machine with config: &{Name:newest-cni-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:11.767647   10756 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:11.775168   10756 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:11.793410   10756 start.go:159] libmachine.API.Create for "newest-cni-207000" (driver="qemu2")
	I0731 12:40:11.793452   10756 client.go:168] LocalClient.Create starting
	I0731 12:40:11.793511   10756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:40:11.793542   10756 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:11.793551   10756 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:11.793594   10756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:40:11.793617   10756 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:11.793625   10756 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:11.794061   10756 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:40:11.952632   10756 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:12.071222   10756 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:12.071228   10756 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:12.071405   10756 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:12.080644   10756 main.go:141] libmachine: STDOUT: 
	I0731 12:40:12.080667   10756 main.go:141] libmachine: STDERR: 
	I0731 12:40:12.080732   10756 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2 +20000M
	I0731 12:40:12.088511   10756 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:12.088525   10756 main.go:141] libmachine: STDERR: 
	I0731 12:40:12.088535   10756 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:12.088539   10756 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:12.088552   10756 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:12.088582   10756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0f:78:ae:42:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:12.090197   10756 main.go:141] libmachine: STDOUT: 
	I0731 12:40:12.090213   10756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:12.090231   10756 client.go:171] duration metric: took 296.781ms to LocalClient.Create
	I0731 12:40:14.092455   10756 start.go:128] duration metric: took 2.324798417s to createHost
	I0731 12:40:14.092568   10756 start.go:83] releasing machines lock for "newest-cni-207000", held for 2.325013166s
	W0731 12:40:14.092628   10756 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:14.103616   10756 out.go:177] * Deleting "newest-cni-207000" in qemu2 ...
	W0731 12:40:14.136760   10756 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:14.136814   10756 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:19.138935   10756 start.go:360] acquireMachinesLock for newest-cni-207000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:19.139463   10756 start.go:364] duration metric: took 433.875µs to acquireMachinesLock for "newest-cni-207000"
	I0731 12:40:19.139563   10756 start.go:93] Provisioning new machine with config: &{Name:newest-cni-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:40:19.139864   10756 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:40:19.149399   10756 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:40:19.202810   10756 start.go:159] libmachine.API.Create for "newest-cni-207000" (driver="qemu2")
	I0731 12:40:19.202862   10756 client.go:168] LocalClient.Create starting
	I0731 12:40:19.202971   10756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/ca.pem
	I0731 12:40:19.203011   10756 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:19.203027   10756 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:19.203084   10756 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19355-6494/.minikube/certs/cert.pem
	I0731 12:40:19.203113   10756 main.go:141] libmachine: Decoding PEM data...
	I0731 12:40:19.203128   10756 main.go:141] libmachine: Parsing certificate...
	I0731 12:40:19.203667   10756 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso...
	I0731 12:40:19.370578   10756 main.go:141] libmachine: Creating SSH key...
	I0731 12:40:19.478612   10756 main.go:141] libmachine: Creating Disk image...
	I0731 12:40:19.478619   10756 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:40:19.478790   10756 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2.raw /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:19.487858   10756 main.go:141] libmachine: STDOUT: 
	I0731 12:40:19.487876   10756 main.go:141] libmachine: STDERR: 
	I0731 12:40:19.487935   10756 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2 +20000M
	I0731 12:40:19.495809   10756 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:40:19.495828   10756 main.go:141] libmachine: STDERR: 
	I0731 12:40:19.495838   10756 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:19.495841   10756 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:40:19.495852   10756 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:19.495880   10756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:02:db:1e:e6:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:19.497510   10756 main.go:141] libmachine: STDOUT: 
	I0731 12:40:19.497524   10756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:19.497536   10756 client.go:171] duration metric: took 294.675916ms to LocalClient.Create
	I0731 12:40:21.499686   10756 start.go:128] duration metric: took 2.359840292s to createHost
	I0731 12:40:21.499751   10756 start.go:83] releasing machines lock for "newest-cni-207000", held for 2.360308583s
	W0731 12:40:21.500113   10756 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-207000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-207000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:21.509798   10756 out.go:177] 
	W0731 12:40:21.520758   10756 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:21.520802   10756 out.go:239] * 
	* 
	W0731 12:40:21.523273   10756 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:21.533755   10756 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-207000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000: exit status 7 (61.075167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.96s)
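
Both VM creation attempts die at the same step: QEMU is launched through socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so every start on this host fails the same way. A minimal connectivity check, assuming the socket path from the log (a diagnostic sketch, not part of minikube):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// matches the "Connection refused" failures above
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }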

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-819000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-819000 create -f testdata/busybox.yaml: exit status 1 (29.529958ms)

** stderr ** 
	error: context "default-k8s-diff-port-819000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-819000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (29.225792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (28.150292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
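
The `context ... does not exist` error is a downstream symptom: minikube writes a kubeconfig context only once a cluster comes up, and the start for this profile never got that far. A sketch that lists the contexts that do exist, assuming k8s.io/client-go and the KUBECONFIG path shown earlier in this report:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.LoadFromFile(
    		"/Users/jenkins/minikube-integration/19355-6494/kubeconfig")
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}
    	for name := range cfg.Contexts {
    		// default-k8s-diff-port-819000 will be absent: the start failed
    		// before minikube could write its context
    		fmt.Println("context:", name)
    	}
    }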

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-819000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-819000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-819000 describe deploy/metrics-server -n kube-system: exit status 1 (26.880208ms)

** stderr ** 
	error: context "default-k8s-diff-port-819000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-819000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (28.888166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
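
The assertion at start_stop_delete_test.go:221 is a containment check: the metrics-server deployment description must mention the image rewritten onto the fake.domain registry, and here the description is empty because `kubectl describe` could not reach a cluster. An illustrative version of that check (variable names are not the test's own):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	deployInfo := "" // empty here: `kubectl describe` itself failed
    	want := " fake.domain/registry.k8s.io/echoserver:1.4"
    	if !strings.Contains(deployInfo, want) {
    		fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
    	}
    }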

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-819000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-819000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.824190958s)

-- stdout --
	* [default-k8s-diff-port-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-819000" primary control-plane node in "default-k8s-diff-port-819000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-819000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:20.796849   10802 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:20.796965   10802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:20.796968   10802 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:20.796970   10802 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:20.797095   10802 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:20.798061   10802 out.go:298] Setting JSON to false
	I0731 12:40:20.813998   10802 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5983,"bootTime":1722448837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:40:20.814070   10802 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:20.818811   10802 out.go:177] * [default-k8s-diff-port-819000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:20.825818   10802 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:40:20.825894   10802 notify.go:220] Checking for updates...
	I0731 12:40:20.832849   10802 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:40:20.835838   10802 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:20.838828   10802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:20.841774   10802 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:40:20.844862   10802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:20.848038   10802 config.go:182] Loaded profile config "default-k8s-diff-port-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:20.848321   10802 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:20.852813   10802 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:40:20.858736   10802 start.go:297] selected driver: qemu2
	I0731 12:40:20.858742   10802 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:20.858796   10802 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:20.861013   10802 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:40:20.861037   10802 cni.go:84] Creating CNI manager for ""
	I0731 12:40:20.861044   10802 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:20.861081   10802 start.go:340] cluster config:
	{Name:default-k8s-diff-port-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-819000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:20.864620   10802 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:20.871769   10802 out.go:177] * Starting "default-k8s-diff-port-819000" primary control-plane node in "default-k8s-diff-port-819000" cluster
	I0731 12:40:20.875789   10802 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:40:20.875804   10802 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:40:20.875813   10802 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:20.875871   10802 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:20.875876   10802 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:40:20.875942   10802 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/default-k8s-diff-port-819000/config.json ...
	I0731 12:40:20.876382   10802 start.go:360] acquireMachinesLock for default-k8s-diff-port-819000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:21.499929   10802 start.go:364] duration metric: took 623.538292ms to acquireMachinesLock for "default-k8s-diff-port-819000"
	I0731 12:40:21.500095   10802 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:21.500129   10802 fix.go:54] fixHost starting: 
	I0731 12:40:21.500799   10802 fix.go:112] recreateIfNeeded on default-k8s-diff-port-819000: state=Stopped err=<nil>
	W0731 12:40:21.500844   10802 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:21.516770   10802 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-819000" ...
	I0731 12:40:21.523799   10802 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:21.523986   10802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b9:a8:10:9f:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:21.534462   10802 main.go:141] libmachine: STDOUT: 
	I0731 12:40:21.534569   10802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:21.534700   10802 fix.go:56] duration metric: took 34.578875ms for fixHost
	I0731 12:40:21.534722   10802 start.go:83] releasing machines lock for "default-k8s-diff-port-819000", held for 34.742458ms
	W0731 12:40:21.534770   10802 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:21.534935   10802 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:21.534955   10802 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:26.537072   10802 start.go:360] acquireMachinesLock for default-k8s-diff-port-819000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:26.537407   10802 start.go:364] duration metric: took 265.917µs to acquireMachinesLock for "default-k8s-diff-port-819000"
	I0731 12:40:26.537491   10802 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:26.537511   10802 fix.go:54] fixHost starting: 
	I0731 12:40:26.538323   10802 fix.go:112] recreateIfNeeded on default-k8s-diff-port-819000: state=Stopped err=<nil>
	W0731 12:40:26.538355   10802 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:26.543897   10802 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-819000" ...
	I0731 12:40:26.551790   10802 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:26.552023   10802 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b9:a8:10:9f:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/default-k8s-diff-port-819000/disk.qcow2
	I0731 12:40:26.561091   10802 main.go:141] libmachine: STDOUT: 
	I0731 12:40:26.561153   10802 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:26.561220   10802 fix.go:56] duration metric: took 23.711042ms for fixHost
	I0731 12:40:26.561237   10802 start.go:83] releasing machines lock for "default-k8s-diff-port-819000", held for 23.809959ms
	W0731 12:40:26.561410   10802 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-819000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-819000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:26.568779   10802 out.go:177] 
	W0731 12:40:26.571868   10802 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:26.571917   10802 out.go:239] * 
	* 
	W0731 12:40:26.574590   10802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:26.581789   10802 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-819000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (66.124625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.89s)
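
The restart path fails identically to the first start: fixHost launches QEMU through the same dead socket, waits five seconds (start.go:729), retries once, then exits 80 with GUEST_PROVISION. The control flow, sketched for illustration (not minikube's actual source):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second)
    		if err := startHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }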

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-207000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-207000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.182589167s)

-- stdout --
	* [newest-cni-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-207000" primary control-plane node in "newest-cni-207000" cluster
	* Restarting existing qemu2 VM for "newest-cni-207000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-207000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:40:25.172107   10835 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:25.172246   10835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:25.172249   10835 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:25.172252   10835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:25.172387   10835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:25.173393   10835 out.go:298] Setting JSON to false
	I0731 12:40:25.189371   10835 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5988,"bootTime":1722448837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:40:25.189438   10835 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:40:25.194229   10835 out.go:177] * [newest-cni-207000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:40:25.201266   10835 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:40:25.201329   10835 notify.go:220] Checking for updates...
	I0731 12:40:25.208256   10835 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:40:25.211164   10835 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:40:25.214241   10835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:40:25.217221   10835 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:40:25.220163   10835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:40:25.223438   10835 config.go:182] Loaded profile config "newest-cni-207000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:40:25.223732   10835 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:40:25.228205   10835 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:40:25.235190   10835 start.go:297] selected driver: qemu2
	I0731 12:40:25.235197   10835 start.go:901] validating driver "qemu2" against &{Name:newest-cni-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:25.235259   10835 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:40:25.237614   10835 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 12:40:25.237657   10835 cni.go:84] Creating CNI manager for ""
	I0731 12:40:25.237664   10835 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:40:25.237691   10835 start.go:340] cluster config:
	{Name:newest-cni-207000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-207000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:40:25.241291   10835 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:40:25.247184   10835 out.go:177] * Starting "newest-cni-207000" primary control-plane node in "newest-cni-207000" cluster
	I0731 12:40:25.251239   10835 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:40:25.251260   10835 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:40:25.251269   10835 cache.go:56] Caching tarball of preloaded images
	I0731 12:40:25.251330   10835 preload.go:172] Found /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:40:25.251335   10835 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:40:25.251391   10835 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/newest-cni-207000/config.json ...
	I0731 12:40:25.251816   10835 start.go:360] acquireMachinesLock for newest-cni-207000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:25.251844   10835 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "newest-cni-207000"
	I0731 12:40:25.251852   10835 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:25.251856   10835 fix.go:54] fixHost starting: 
	I0731 12:40:25.251970   10835 fix.go:112] recreateIfNeeded on newest-cni-207000: state=Stopped err=<nil>
	W0731 12:40:25.251979   10835 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:25.256172   10835 out.go:177] * Restarting existing qemu2 VM for "newest-cni-207000" ...
	I0731 12:40:25.264194   10835 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:25.264225   10835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:02:db:1e:e6:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:25.266109   10835 main.go:141] libmachine: STDOUT: 
	I0731 12:40:25.266128   10835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:25.266158   10835 fix.go:56] duration metric: took 14.30125ms for fixHost
	I0731 12:40:25.266162   10835 start.go:83] releasing machines lock for "newest-cni-207000", held for 14.314625ms
	W0731 12:40:25.266171   10835 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:25.266212   10835 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:25.266217   10835 start.go:729] Will try again in 5 seconds ...
	I0731 12:40:30.268335   10835 start.go:360] acquireMachinesLock for newest-cni-207000: {Name:mk01a5d6e8bbef2a21ed9730af3c664c6e667055 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:40:30.268948   10835 start.go:364] duration metric: took 504.875µs to acquireMachinesLock for "newest-cni-207000"
	I0731 12:40:30.269088   10835 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:40:30.269110   10835 fix.go:54] fixHost starting: 
	I0731 12:40:30.269874   10835 fix.go:112] recreateIfNeeded on newest-cni-207000: state=Stopped err=<nil>
	W0731 12:40:30.269902   10835 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:40:30.274352   10835 out.go:177] * Restarting existing qemu2 VM for "newest-cni-207000" ...
	I0731 12:40:30.282169   10835 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:40:30.282406   10835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:02:db:1e:e6:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19355-6494/.minikube/machines/newest-cni-207000/disk.qcow2
	I0731 12:40:30.292304   10835 main.go:141] libmachine: STDOUT: 
	I0731 12:40:30.292365   10835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:40:30.292456   10835 fix.go:56] duration metric: took 23.349834ms for fixHost
	I0731 12:40:30.292470   10835 start.go:83] releasing machines lock for "newest-cni-207000", held for 23.49875ms
	W0731 12:40:30.292631   10835 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-207000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-207000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:40:30.301354   10835 out.go:177] 
	W0731 12:40:30.302811   10835 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:40:30.302874   10835 out.go:239] * 
	* 
	W0731 12:40:30.305468   10835 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:40:30.318374   10835 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-207000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000: exit status 7 (68.322959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
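
Note: both SecondStart failures above share one root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot connect and cannot hand qemu-system-aarch64 the network file descriptor it expects via "-netdev socket,id=net0,fd=3". A minimal triage sketch for the CI host (the launchd service label below is an assumption; adjust to however socket_vmnet was installed on the agent):

	# Does the socket file exist?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded? (label io.github.lima-vm.socket_vmnet is an assumption)
	sudo launchctl list | grep -i socket_vmnet
	# If loaded but wedged, restart it
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet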

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-819000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (31.716959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-819000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-819000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-819000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.389041ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-819000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-819000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (28.804375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
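
Note: UserAppExistsAfterStop and AddonExistsAfterStop are downstream casualties of the SecondStart failure: since the cluster never came back up, minikube never rewrote the kubeconfig, so every kubectl call fails with context "default-k8s-diff-port-819000" does not exist. A quick way to confirm against the same workspace (commands are real, the check itself is a hedged suggestion):

	kubectl config get-contexts
	out/minikube-darwin-arm64 profile list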

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-819000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (27.942333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
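
Note: the (-want +got) block above is a go-cmp style diff; every entry is prefixed "-", meaning all expected v1.30.3 default images were wanted and nothing was got: with the VM stopped there is no Docker daemon for "image list" to query. To inspect the raw payload directly (a hedged sketch, assuming jq is available on the agent):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-819000 image list --format=json | jq .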

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-819000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-819000 --alsologtostderr -v=1: exit status 83 (39.474666ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-819000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-819000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:40:26.845086   10854 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:26.845231   10854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:26.845234   10854 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:26.845237   10854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:26.845360   10854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:26.845591   10854 out.go:298] Setting JSON to false
	I0731 12:40:26.845597   10854 mustload.go:65] Loading cluster: default-k8s-diff-port-819000
	I0731 12:40:26.845781   10854 config.go:182] Loaded profile config "default-k8s-diff-port-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:40:26.849404   10854 out.go:177] * The control-plane node default-k8s-diff-port-819000 host is not running: state=Stopped
	I0731 12:40:26.853454   10854 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-819000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-819000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (28.197875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (28.55225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-819000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-207000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000: exit status 7 (29.554583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-207000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-207000 --alsologtostderr -v=1: exit status 83 (41.709666ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-207000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-207000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:40:30.497806   10878 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:40:30.497966   10878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:30.497970   10878 out.go:304] Setting ErrFile to fd 2...
	I0731 12:40:30.497972   10878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:40:30.498107   10878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:40:30.498347   10878 out.go:298] Setting JSON to false
	I0731 12:40:30.498352   10878 mustload.go:65] Loading cluster: newest-cni-207000
	I0731 12:40:30.498550   10878 config.go:182] Loaded profile config "newest-cni-207000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:40:30.502477   10878 out.go:177] * The control-plane node newest-cni-207000 host is not running: state=Stopped
	I0731 12:40:30.506464   10878 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-207000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-207000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000: exit status 7 (30.180125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-207000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000: exit status 7 (29.227334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-207000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
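
Note: both Pause tests exit with status 83 because minikube short-circuits when the control-plane host is Stopped: it prints the advice shown in stdout instead of attempting to pause anything. The recovery path is the one the tool itself suggests, e.g.:

	out/minikube-darwin-arm64 start -p newest-cni-207000
	out/minikube-darwin-arm64 pause -p newest-cni-207000 --alsologtostderr -v=1

(In this run the start step would still fail, since the socket_vmnet daemon is down; see the note after SecondStart above.)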

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 11.53
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 11.54
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.1
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.15
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.11
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 9.48
55 TestFunctional/serial/CopySyncFile 0.01
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.72
64 TestFunctional/serial/CacheCmd/cache/add_local 1.06
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.33
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.7
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.17
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
247 TestStoppedBinaryUpgrade/Setup 0.85
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
267 TestNoKubernetes/serial/ProfileList 0.1
268 TestNoKubernetes/serial/Stop 3.04
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
284 TestStartStop/group/old-k8s-version/serial/Stop 3.97
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
295 TestStartStop/group/no-preload/serial/Stop 3.17
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 2.08
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.83
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
324 TestStartStop/group/newest-cni/serial/Stop 3.36
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-203000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-203000: exit status 85 (96.118583ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:13 PDT |          |
	|         | -p download-only-203000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:13:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:13:56.318827    6988 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:56.318973    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:56.318977    6988 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:56.318979    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:56.319113    6988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	W0731 12:13:56.319201    6988 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19355-6494/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19355-6494/.minikube/config/config.json: no such file or directory
	I0731 12:13:56.320595    6988 out.go:298] Setting JSON to true
	I0731 12:13:56.338196    6988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4399,"bootTime":1722448837,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:13:56.338267    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:56.342474    6988 out.go:97] [download-only-203000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:56.342590    6988 notify.go:220] Checking for updates...
	W0731 12:13:56.342642    6988 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 12:13:56.345509    6988 out.go:169] MINIKUBE_LOCATION=19355
	I0731 12:13:56.348559    6988 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:13:56.353457    6988 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:56.356479    6988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:56.359452    6988 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	W0731 12:13:56.365438    6988 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:13:56.365651    6988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:56.368455    6988 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:13:56.368475    6988 start.go:297] selected driver: qemu2
	I0731 12:13:56.368489    6988 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:13:56.368556    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:13:56.371382    6988 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:13:56.376768    6988 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:13:56.376866    6988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:13:56.376932    6988 cni.go:84] Creating CNI manager for ""
	I0731 12:13:56.376949    6988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:13:56.376997    6988 start.go:340] cluster config:
	{Name:download-only-203000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-203000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:56.381001    6988 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:56.384503    6988 out.go:97] Downloading VM boot image ...
	I0731 12:13:56.384522    6988 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/iso/arm64/minikube-v1.33.1-1722420371-19355-arm64.iso
	I0731 12:14:02.974785    6988 out.go:97] Starting "download-only-203000" primary control-plane node in "download-only-203000" cluster
	I0731 12:14:02.974811    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:03.032310    6988 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:03.032317    6988 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:03.032465    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:03.037553    6988 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 12:14:03.037559    6988 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:03.127820    6988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:09.805636    6988 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:09.805790    6988 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:10.500410    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:14:10.500603    6988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-203000/config.json ...
	I0731 12:14:10.500620    6988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-203000/config.json: {Name:mkb14538d37fd692acda6d82852c4f68bca74e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:14:10.500864    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:14:10.501082    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 12:14:10.903793    6988 out.go:169] 
	W0731 12:14:10.910994    6988 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60 0x1044dda60] Decompressors:map[bz2:0x1400000ee90 gz:0x1400000ee98 tar:0x1400000ee40 tar.bz2:0x1400000ee50 tar.gz:0x1400000ee60 tar.xz:0x1400000ee70 tar.zst:0x1400000ee80 tbz2:0x1400000ee50 tgz:0x1400000ee60 txz:0x1400000ee70 tzst:0x1400000ee80 xz:0x1400000eea0 zip:0x1400000eeb0 zst:0x1400000eea8] Getters:map[file:0x1400090e6d0 http:0x1400070a370 https:0x1400070a3c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 12:14:10.911023    6988 out_reason.go:110] 
	W0731 12:14:10.917863    6988 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:14:10.921821    6988 out.go:169] 
	
	
	* The control-plane node download-only-203000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-203000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
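
Note: the 404 buried in the getter dump above is expected for this matrix cell: upstream Kubernetes did not publish darwin/arm64 kubectl binaries for v1.20.0 (Apple-silicon builds only appeared in later releases), so both the binary and its .sha256 sidecar 404. Reproducible outside the harness:

	# Follow dl.k8s.io's redirect and print the status lines; expect a 404 for v1.20.0
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | grep '^HTTP'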

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-203000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (11.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-323000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-323000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (11.529285542s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.53s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-323000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-323000: exit status 85 (79.830792ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:13 PDT |                     |
	|         | -p download-only-203000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-203000        | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | -o=json --download-only        | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-323000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:14:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:14:11.335171    7014 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:11.335300    7014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:11.335304    7014 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:11.335306    7014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:11.335417    7014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:14:11.336485    7014 out.go:298] Setting JSON to true
	I0731 12:14:11.352617    7014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4414,"bootTime":1722448837,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:14:11.352701    7014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:14:11.357847    7014 out.go:97] [download-only-323000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:14:11.357983    7014 notify.go:220] Checking for updates...
	I0731 12:14:11.361652    7014 out.go:169] MINIKUBE_LOCATION=19355
	I0731 12:14:11.364867    7014 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:14:11.368825    7014 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:14:11.370350    7014 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:14:11.373797    7014 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	W0731 12:14:11.379786    7014 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:14:11.379974    7014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:14:11.387819    7014 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:14:11.387831    7014 start.go:297] selected driver: qemu2
	I0731 12:14:11.387836    7014 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:14:11.387912    7014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:14:11.390776    7014 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:14:11.396102    7014 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:14:11.396184    7014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:14:11.396201    7014 cni.go:84] Creating CNI manager for ""
	I0731 12:14:11.396210    7014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:14:11.396215    7014 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:14:11.396252    7014 start.go:340] cluster config:
	{Name:download-only-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:11.399801    7014 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:14:11.402721    7014 out.go:97] Starting "download-only-323000" primary control-plane node in "download-only-323000" cluster
	I0731 12:14:11.402727    7014 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:14:11.486478    7014 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:14:11.486503    7014 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:11.486691    7014 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:14:11.490995    7014 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 12:14:11.491005    7014 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:11.571723    7014 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:14:18.367501    7014 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:18.367693    7014 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:18.911474    7014 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:14:18.911663    7014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-323000/config.json ...
	I0731 12:14:18.911679    7014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-323000/config.json: {Name:mk44cb09b9e449122cbe97579f7c86085ee497fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:14:18.911915    7014 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:14:18.912041    7014 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-323000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-323000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
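The download.go:107 line above fetches the preload tarball with a "?checksum=md5:..." suffix, i.e. the downloader hashes the payload after writing it and fails the fetch on a mismatch (the preload.go:254 "verifying checksum" line is that pass). A minimal sketch of the same verify-while-downloading step using only the Go standard library; the function name and placeholder URL are illustrative, not minikube's actual code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchAndVerify downloads url to dest and compares the payload's MD5
// against wantMD5, mirroring the ?checksum=md5:... contract in the log.
func fetchAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := md5.New()
	// Tee the body through the hash while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder URL; the real URL and md5 are in the download.go:107 line above.
	err := fetchAndVerify("https://example.com/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "5a76dba1959f6b6fc5e29e1e172ab9ca")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}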

TestDownloadOnly/v1.30.3/DeleteAll (0.1s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-323000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (11.54s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-745000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-745000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (11.53598175s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (11.54s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-745000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-745000: exit status 85 (75.141292ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:13 PDT |                     |
	|         | -p download-only-203000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-203000             | download-only-203000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | -o=json --download-only             | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-323000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| delete  | -p download-only-323000             | download-only-323000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT | 31 Jul 24 12:14 PDT |
	| start   | -o=json --download-only             | download-only-745000 | jenkins | v1.33.1 | 31 Jul 24 12:14 PDT |                     |
	|         | -p download-only-745000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:14:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 12:14:23.149410    7036 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:23.149545    7036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:23.149548    7036 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:23.149550    7036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:23.149674    7036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:14:23.150735    7036 out.go:298] Setting JSON to true
	I0731 12:14:23.166887    7036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4426,"bootTime":1722448837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:14:23.166951    7036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:14:23.171867    7036 out.go:97] [download-only-745000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:14:23.171986    7036 notify.go:220] Checking for updates...
	I0731 12:14:23.175796    7036 out.go:169] MINIKUBE_LOCATION=19355
	I0731 12:14:23.179841    7036 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:14:23.183862    7036 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:14:23.185360    7036 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:14:23.188856    7036 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	W0731 12:14:23.194824    7036 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 12:14:23.194972    7036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:14:23.197846    7036 out.go:97] Using the qemu2 driver based on user configuration
	I0731 12:14:23.197856    7036 start.go:297] selected driver: qemu2
	I0731 12:14:23.197860    7036 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:14:23.197912    7036 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:14:23.200859    7036 out.go:169] Automatically selected the socket_vmnet network
	I0731 12:14:23.206151    7036 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 12:14:23.206240    7036 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:14:23.206275    7036 cni.go:84] Creating CNI manager for ""
	I0731 12:14:23.206282    7036 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:14:23.206287    7036 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:14:23.206325    7036 start.go:340] cluster config:
	{Name:download-only-745000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-745000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:23.209764    7036 iso.go:125] acquiring lock: {Name:mka061849733a523aab10cfd6a2b7e7835445c46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:14:23.212846    7036 out.go:97] Starting "download-only-745000" primary control-plane node in "download-only-745000" cluster
	I0731 12:14:23.212854    7036 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:14:23.268279    7036 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:23.268296    7036 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:23.268470    7036 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:14:23.271849    7036 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 12:14:23.271857    7036 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:23.348076    7036 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:29.974720    7036 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:29.974888    7036 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 12:14:30.494382    7036 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:14:30.494570    7036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-745000/config.json ...
	I0731 12:14:30.494587    7036 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19355-6494/.minikube/profiles/download-only-745000/config.json: {Name:mkc08688c3418740f98af2f1e67c718d5dbf2504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:14:30.494830    7036 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:14:30.494962    7036 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19355-6494/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-745000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-745000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.1s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-745000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-399000 --alsologtostderr --binary-mirror http://127.0.0.1:51047 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-399000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-399000
--- PASS: TestBinaryMirror (0.29s)
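TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:51047 here) so the Kubernetes binaries are fetched from it instead of dl.k8s.io. Standing up such a mirror needs nothing beyond net/http; a sketch, where the ./mirror directory layout is an assumption chosen to match the release-tree paths seen in the kubectl download URLs above:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory laid out like the upstream release tree,
	// e.g. ./mirror/v1.30.3/bin/darwin/arm64/kubectl.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:51047", nil))
}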

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-565000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-565000: exit status 85 (55.419083ms)

-- stdout --
	* Profile "addons-565000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-565000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-565000: exit status 85 (59.426334ms)

-- stdout --
	* Profile "addons-565000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
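Both PreSetup tests pass because enabling or disabling an addon on a profile that does not exist is supposed to fail; exit status 85 accompanies the "Profile ... not found" message in both runs above. Reading that status from Go is the standard os/exec pattern the harness itself relies on; a minimal sketch, with the binary and profile name taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"addons", "enable", "dashboard", "-p", "addons-565000")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // 85 in the run above
	} else if err != nil {
		fmt.Println("failed to start:", err)
	}
}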

TestHyperKitDriverInstallOrUpdate (10.15s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.15s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status: exit status 7 (30.056917ms)

-- stdout --
	nospam-249000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status: exit status 7 (29.378625ms)

-- stdout --
	nospam-249000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status: exit status 7 (29.739292ms)

-- stdout --
	nospam-249000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.11s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause: exit status 83 (38.897834ms)

-- stdout --
	* The control-plane node nospam-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-249000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause: exit status 83 (38.945958ms)

-- stdout --
	* The control-plane node nospam-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-249000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause: exit status 83 (35.753125ms)

-- stdout --
	* The control-plane node nospam-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-249000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.11s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause: exit status 83 (39.668625ms)

-- stdout --
	* The control-plane node nospam-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-249000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause: exit status 83 (38.072542ms)

-- stdout --
	* The control-plane node nospam-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-249000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause: exit status 83 (39.797333ms)

-- stdout --
	* The control-plane node nospam-249000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-249000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.48s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 stop: (2.968142792s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 stop: (3.2207085s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-249000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-249000 stop: (3.287652208s)
--- PASS: TestErrorSpam/stop (9.48s)

TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19355-6494/.minikube/files/etc/test/nested/copy/6986/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2742291883/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cache add minikube-local-cache-test:functional-373000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 cache delete minikube-local-cache-test:functional-373000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-373000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
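add_local exercises the full round trip for a user-built image: build a throwaway image with docker, add it to minikube's cache, then delete it from the cache and from docker. The same four steps as a standalone sequence; the loop wrapper is mine, while the tag, profile, and commands are the ones in the log, with "." standing in for the temp build context above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const tag = "minikube-local-cache-test:functional-373000"
	steps := [][]string{
		{"docker", "build", "-t", tag, "."},
		{"out/minikube-darwin-arm64", "-p", "functional-373000", "cache", "add", tag},
		{"out/minikube-darwin-arm64", "-p", "functional-373000", "cache", "delete", tag},
		{"docker", "rmi", tag},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", s, err, out)
			return
		}
	}
}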

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 config get cpus: exit status 14 (28.138625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 config get cpus: exit status 14 (36.316791ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
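The ConfigCmd sequence above is a round-trip contract: config get cpus exits 14 while the key is unset, succeeds after config set cpus 2, and exits 14 again after config unset cpus. A sketch that drives the same sequence and prints the exit codes; the run helper is mine, while the binary path and the expected code 14 come from the log:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from this report and returns its exit code,
// or -1 if the process could not be started at all.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	cmd.Run() // the exit code survives in ProcessState even on a non-zero exit
	if cmd.ProcessState == nil {
		return -1
	}
	return cmd.ProcessState.ExitCode()
}

func main() {
	base := []string{"-p", "functional-373000", "config"}
	run(append(base, "unset", "cpus")...)
	fmt.Println(run(append(base, "get", "cpus")...)) // 14: key not in config
	run(append(base, "set", "cpus", "2")...)
	fmt.Println(run(append(base, "get", "cpus")...)) // 0: key present
	run(append(base, "unset", "cpus")...)
	fmt.Println(run(append(base, "get", "cpus")...)) // 14 again
}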

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (159.1185ms)

-- stdout --
	* [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 12:16:11.366275    7619 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:11.366507    7619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:11.366512    7619 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:11.366516    7619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:11.366703    7619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:16:11.368128    7619 out.go:298] Setting JSON to false
	I0731 12:16:11.388696    7619 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4534,"bootTime":1722448837,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:16:11.388768    7619 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:16:11.393129    7619 out.go:177] * [functional-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:16:11.400122    7619 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:16:11.400161    7619 notify.go:220] Checking for updates...
	I0731 12:16:11.407047    7619 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:16:11.410082    7619 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:16:11.413006    7619 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:16:11.416063    7619 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:16:11.419053    7619 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:16:11.420648    7619 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:11.420960    7619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:16:11.424999    7619 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:16:11.431919    7619 start.go:297] selected driver: qemu2
	I0731 12:16:11.431927    7619 start.go:901] validating driver "qemu2" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:16:11.431991    7619 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:16:11.438052    7619 out.go:177] 
	W0731 12:16:11.442086    7619 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 12:16:11.446024    7619 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-373000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
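DryRun passes because the undersized request is rejected up front: the validator compares the requested allocation (250MB) against a usable floor (1800MB in this build) before any VM work, and exits with status 23 and reason RSRC_INSUFFICIENT_REQ_MEMORY. A reconstruction of that guard from the message in the log, not minikube's actual source:

package main

import "fmt"

const minUsableMB = 1800 // floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

// validateRequestedMemory mirrors the observed behaviour: refuse any request
// below the usable minimum before doing anything expensive.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in the run above
	fmt.Println(validateRequestedMemory(4000)) // accepted (the suggested default earlier in the log)
}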

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-373000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.814834ms)

-- stdout --
	* [functional-373000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 12:16:11.596049    7630 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:16:11.596160    7630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:11.596163    7630 out.go:304] Setting ErrFile to fd 2...
	I0731 12:16:11.596172    7630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:16:11.596302    7630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19355-6494/.minikube/bin
	I0731 12:16:11.597673    7630 out.go:298] Setting JSON to false
	I0731 12:16:11.614391    7630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4534,"bootTime":1722448837,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0731 12:16:11.614461    7630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:16:11.619075    7630 out.go:177] * [functional-373000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0731 12:16:11.626061    7630 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 12:16:11.626113    7630 notify.go:220] Checking for updates...
	I0731 12:16:11.633025    7630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	I0731 12:16:11.636082    7630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:16:11.639036    7630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:16:11.642072    7630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	I0731 12:16:11.645074    7630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:16:11.648321    7630 config.go:182] Loaded profile config "functional-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:16:11.648562    7630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:16:11.653028    7630 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0731 12:16:11.659994    7630 start.go:297] selected driver: qemu2
	I0731 12:16:11.660000    7630 start.go:901] validating driver "qemu2" against &{Name:functional-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:16:11.660054    7630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:16:11.666034    7630 out.go:177] 
	W0731 12:16:11.669932    7630 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 12:16:11.674022    7630 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.670599958s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-373000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image rm docker.io/kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-373000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 image save --daemon docker.io/kicbase/echo-server:functional-373000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-373000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "49.9075ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "31.842541ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "44.983958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "31.80625ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011020417s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-373000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-373000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-373000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-373000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-827000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-827000 --output=json --user=testUser: (3.169821625s)
--- PASS: TestJSONOutput/stop/Command (3.17s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-703000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-703000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.86125ms)
-- stdout --
	{"specversion":"1.0","id":"5efb54c2-1b39-4e7c-ab2d-5a610434728e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-703000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e3833c5-1393-4ffc-95a8-b14397e5e35b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"b324fb27-b86a-49eb-98f8-8488b85323e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig"}}
	{"specversion":"1.0","id":"1280ded0-cf21-4b98-b4bb-3578c6e5e54e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6a412640-6781-46ed-b17b-7ef3e13cca39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d069561-af06-422d-95a9-c5720a69e8af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube"}}
	{"specversion":"1.0","id":"f043948f-eec6-44a1-beed-d757d66d0479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6f70f4f6-47e5-45dd-b8ec-c08e07cb09b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-703000
--- PASS: TestErrorJSONOutput (0.20s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-155000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-911000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (102.908584ms)
-- stdout --
	* [NoKubernetes-911000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19355
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19355-6494/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19355-6494/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.569875ms)
-- stdout --
	* The control-plane node NoKubernetes-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-911000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-911000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-911000: (3.039560084s)
--- PASS: TestNoKubernetes/serial/Stop (3.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-911000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.1665ms)
-- stdout --
	* The control-plane node NoKubernetes-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-911000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-629000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-629000 --alsologtostderr -v=3: (3.968628542s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.97s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-629000 -n old-k8s-version-629000: exit status 7 (58.607ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-629000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-592000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-592000 --alsologtostderr -v=3: (3.167717875s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.17s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-592000 -n no-preload-592000: exit status 7 (57.216625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-592000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-401000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-401000 --alsologtostderr -v=3: (2.083661042s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.08s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-401000 -n embed-certs-401000: exit status 7 (53.602541ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-401000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-819000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-819000 --alsologtostderr -v=3: (1.832609708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.83s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-819000 -n default-k8s-diff-port-819000: exit status 7 (58.714333ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-819000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-207000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-207000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-207000 --alsologtostderr -v=3: (3.35804925s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.36s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-207000 -n newest-cni-207000: exit status 7 (54.414792ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-207000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3109818576/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722453336540938000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3109818576/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722453336540938000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3109818576/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722453336540938000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3109818576/001/test-1722453336540938000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.471584ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.019084ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.675459ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (80.31375ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.935667ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.6635ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.388417ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo umount -f /mount-9p": exit status 83 (44.450875ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3109818576/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.21s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1490467581/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.00025ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.576708ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.985791ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.468208ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.159583ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.944291ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.130042ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "sudo umount -f /mount-9p": exit status 83 (42.999125ms)
-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-373000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1490467581/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.49s)
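Note on the pattern above: the harness polls the guest with the same findmnt probe until the 9p mount appears or a deadline passes, then skips rather than fails. Below is a minimal Go sketch of that poll-then-skip pattern; the helper name, deadline, and one-second poll interval are illustrative assumptions, and only the binary path, profile name, probe command, and skip reason are taken from the log. It is not minikube's actual implementation.

    package mounttest

    import (
        "os/exec"
        "testing"
        "time"
    )

    // waitForMount re-runs the probe from the log until it succeeds or the
    // deadline passes. A zero exit status means /mount-9p is a 9p mount
    // visible inside the guest.
    func waitForMount(t *testing.T, deadline time.Duration) bool {
        t.Helper()
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-373000",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if cmd.Run() == nil {
                return true
            }
            time.Sleep(time.Second) // assumed poll interval
        }
        return false
    }

    func TestMountAppears(t *testing.T) {
        if !waitForMount(t, 10*time.Second) {
            // Same skip reason as functional_test_mount_test.go:251 above.
            t.Skip("mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port")
        }
    }

With the VM stopped, every probe here exits with status 83, so the deadline always lapses and the result is reported as SKIP instead of FAIL.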

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (13.06s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3152683499/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3152683499/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3152683499/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (82.731625ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (86.310333ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (86.909959ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (86.41725ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (84.595083ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (85.443208ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-373000 ssh "findmnt -T" /mount1: exit status 83 (86.123625ms)

-- stdout --
	* The control-plane node functional-373000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-373000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3152683499/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3152683499/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-373000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3152683499/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.06s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
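The skip above is a flag gate: gVisor coverage only runs when the suite is invoked with --gvisor=true. A minimal, hypothetical Go sketch of such a gate follows; only the flag name, its false default, and the skip message are taken from the log, and the registration details are illustrative rather than minikube's actual code.

    package gvisortest

    import (
        "flag"
        "testing"
    )

    // The gate: --gvisor defaults to false, so a CI job must opt in explicitly.
    var gvisor = flag.Bool("gvisor", false, "run gVisor addon tests")

    func TestGvisorAddon(t *testing.T) {
        if !*gvisor {
            t.Skip("skipping test because --gvisor=false")
        }
        // actual gVisor addon checks would run here
    }

Because go test parses registered flags before running tests, passing -gvisor on the test command line is enough to flip the gate.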

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-452000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-452000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-452000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/hosts:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/resolv.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-452000

>>> host: crictl pods:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crictl containers:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: describe netcat deployment:
error: context "cilium-452000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-452000" does not exist

>>> k8s: netcat logs:
error: context "cilium-452000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-452000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-452000" does not exist

>>> k8s: coredns logs:
error: context "cilium-452000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-452000" does not exist

>>> k8s: api server logs:
error: context "cilium-452000" does not exist

>>> host: /etc/cni:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: ip a s:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: ip r s:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: iptables-save:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: iptables table nat:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-452000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-452000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-452000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-452000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-452000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-452000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-452000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: kubelet daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: kubelet logs:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-452000

>>> host: docker daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: docker daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: docker system info:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: cri-docker daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: cri-docker daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: cri-dockerd version:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: containerd daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: containerd daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: containerd config dump:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crio daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crio daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/crio:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crio config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

----------------------- debugLogs end: cilium-452000 [took: 2.191976917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-452000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-452000
--- SKIP: TestNetworkPlugins/group/cilium (2.30s)
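The root cause of every failed probe in the debugLogs dump above is visible in its "k8s: kubectl config" entry: the kubeconfig is empty (clusters: null, contexts: null, current-context: "") because the cilium-452000 profile was never started, so any command naming that context fails before ever reaching a cluster. Below is a short, hypothetical Go check that makes this concrete; the gopkg.in/yaml.v3 dependency, the path argument pointing at the dumped kubeconfig, and the output strings are illustrative assumptions.

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Just the kubeconfig fields needed to explain the failures above.
    type kubeconfig struct {
        CurrentContext string `yaml:"current-context"`
        Contexts       []struct {
            Name string `yaml:"name"`
        } `yaml:"contexts"`
    }

    func main() {
        raw, err := os.ReadFile(os.Args[1]) // path to the dumped kubeconfig
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var cfg kubeconfig
        if err := yaml.Unmarshal(raw, &cfg); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, c := range cfg.Contexts {
            if c.Name == "cilium-452000" {
                fmt.Println("context exists")
                return
            }
        }
        // With contexts: null this branch always runs, mirroring the
        // "context was not found" / "does not exist" errors in the log.
        fmt.Println("no such context: cilium-452000")
    }

Against the dump above, Contexts decodes as nil, so the lookup can never succeed, which is exactly the shape of the kubectl errors recorded throughout the debugLogs block.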

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-747000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-747000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)