Test Report: QEMU_macOS 19341

9b97c7bfbeafe185e6db2e35612f0670b350ca0e:2024-07-29:35548

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.49
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.27
36 TestAddons/Setup 10.43
37 TestCertOptions 12
38 TestCertExpiration 197.35
39 TestDockerFlags 12.43
40 TestForceSystemdFlag 12.66
41 TestForceSystemdEnv 10.25
47 TestErrorSpam/setup 9.78
56 TestFunctional/serial/StartWithProxy 9.93
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.73
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.16
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.13
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 97.24
109 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
110 TestFunctional/parallel/ServiceCmd/List 0.04
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
113 TestFunctional/parallel/ServiceCmd/Format 0.04
114 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/Version/components 0.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
127 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
136 TestFunctional/parallel/DockerEnv/bash 0.05
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 39.86
150 TestMultiControlPlane/serial/StartCluster 10.04
151 TestMultiControlPlane/serial/DeployApp 80.34
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 47.3
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.2
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.07
164 TestMultiControlPlane/serial/StopCluster 3.57
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 10.03
174 TestJSONOutput/start/Command 9.87
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.18
206 TestMountStart/serial/StartWithMountFirst 10.08
209 TestMultiNode/serial/FreshStart2Nodes 9.86
210 TestMultiNode/serial/DeployApp2Nodes 100.5
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 55.2
218 TestMultiNode/serial/RestartKeepsNodes 8.33
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 3.63
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 22.29
226 TestPreload 10.11
228 TestScheduledStopUnix 10.05
229 TestSkaffold 12.3
232 TestRunningBinaryUpgrade 587.89
234 TestKubernetesUpgrade 17.46
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.94
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.19
250 TestStoppedBinaryUpgrade/Upgrade 573.38
252 TestPause/serial/Start 10
262 TestNoKubernetes/serial/StartWithK8s 9.75
263 TestNoKubernetes/serial/StartWithStopK8s 5.29
264 TestNoKubernetes/serial/Start 5.29
268 TestNoKubernetes/serial/StartNoArgs 5.34
270 TestNetworkPlugins/group/auto/Start 9.82
271 TestNetworkPlugins/group/kindnet/Start 9.89
272 TestNetworkPlugins/group/calico/Start 9.91
273 TestNetworkPlugins/group/custom-flannel/Start 10.03
274 TestNetworkPlugins/group/false/Start 9.79
275 TestNetworkPlugins/group/enable-default-cni/Start 9.92
276 TestNetworkPlugins/group/flannel/Start 9.82
277 TestNetworkPlugins/group/bridge/Start 9.88
278 TestNetworkPlugins/group/kubenet/Start 10.15
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.95
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.23
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 10.01
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.48
306 TestStartStop/group/embed-certs/serial/DeployApp 0.1
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
313 TestStartStop/group/embed-certs/serial/SecondStart 5.26
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.67
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/FirstStart 10.22
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (12.49s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-753000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-753000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (12.485995667s)

-- stdout --
	{"specversion":"1.0","id":"04c82979-542a-4505-8a24-304c3acda6b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-753000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7438a1b1-f887-4848-b507-33f4b458b767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19341"}}
	{"specversion":"1.0","id":"71436dac-ab01-405b-afb9-cb8ae700512f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig"}}
	{"specversion":"1.0","id":"74dcad26-638d-434a-8b48-a23ef270f291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0504f5d7-0a59-491d-b807-6b4c18f6bbb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2bf969b-7944-453e-857c-efa9e1ad2a67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube"}}
	{"specversion":"1.0","id":"c7cb41a2-2cc8-448f-ad6e-1d01e4f163dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"ebaddd81-5e45-4d68-a9b0-cae393cf1a3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f376948-e79e-41be-b780-29a0bcc2b52b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d2d54ade-5285-4fd7-b974-f747e6b68bff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"85d89842-a45a-4595-a240-b1e8932b8beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-753000\" primary control-plane node in \"download-only-753000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"518e21d9-82fb-4004-98e2-cbbcce53a3f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e223e97e-ac6d-41ad-8b6e-80063c0fb2a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60] Decompressors:map[bz2:0x1400069f9f0 gz:0x1400069f9f8 tar:0x1400069f9a0 tar.bz2:0x1400069f9b0 tar.gz:0x1400069f9c0 tar.xz:0x1400069f9d0 tar.zst:0x1400069f9e0 tbz2:0x1400069f9b0 tgz:0x1
400069f9c0 txz:0x1400069f9d0 tzst:0x1400069f9e0 xz:0x1400069fa00 zip:0x1400069fa10 zst:0x1400069fa08] Getters:map[file:0x140008fc6e0 http:0x140006388c0 https:0x14000638910] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b68b450a-f943-47fc-b5fb-63ebe3ba0980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0729 04:16:08.791616   15975 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:08.791763   15975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:08.791766   15975 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:08.791769   15975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:08.791903   15975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	W0729 04:16:08.791992   15975 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19341-15486/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19341-15486/.minikube/config/config.json: no such file or directory
	I0729 04:16:08.793308   15975 out.go:298] Setting JSON to true
	I0729 04:16:08.810544   15975 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8137,"bootTime":1722243631,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:16:08.810613   15975 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:08.816379   15975 out.go:97] [download-only-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:08.816566   15975 notify.go:220] Checking for updates...
	W0729 04:16:08.816621   15975 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 04:16:08.818201   15975 out.go:169] MINIKUBE_LOCATION=19341
	I0729 04:16:08.823624   15975 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:16:08.827842   15975 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:08.832962   15975 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:08.835875   15975 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	W0729 04:16:08.841784   15975 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:16:08.841996   15975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:08.845136   15975 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:16:08.845153   15975 start.go:297] selected driver: qemu2
	I0729 04:16:08.845168   15975 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:16:08.845224   15975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:16:08.849503   15975 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:16:08.853344   15975 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:16:08.853448   15975 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:16:08.853517   15975 cni.go:84] Creating CNI manager for ""
	I0729 04:16:08.853534   15975 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:16:08.853580   15975 start.go:340] cluster config:
	{Name:download-only-753000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:08.857482   15975 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:08.861851   15975 out.go:97] Downloading VM boot image ...
	I0729 04:16:08.861867   15975 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 04:16:13.822716   15975 out.go:97] Starting "download-only-753000" primary control-plane node in "download-only-753000" cluster
	I0729 04:16:13.822741   15975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:16:13.878505   15975 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:13.878513   15975 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:13.878672   15975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:16:13.883732   15975 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 04:16:13.883741   15975 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:13.959132   15975 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:20.113895   15975 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:20.114057   15975 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:20.810777   15975 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:16:20.810995   15975 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/download-only-753000/config.json ...
	I0729 04:16:20.811013   15975 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/download-only-753000/config.json: {Name:mk53306436022dc1bb9c5bc61fd40e745b54e730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:16:20.812579   15975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:16:20.813060   15975 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 04:16:21.199695   15975 out.go:169] 
	W0729 04:16:21.203737   15975 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60] Decompressors:map[bz2:0x1400069f9f0 gz:0x1400069f9f8 tar:0x1400069f9a0 tar.bz2:0x1400069f9b0 tar.gz:0x1400069f9c0 tar.xz:0x1400069f9d0 tar.zst:0x1400069f9e0 tbz2:0x1400069f9b0 tgz:0x1400069f9c0 txz:0x1400069f9d0 tzst:0x1400069f9e0 xz:0x1400069fa00 zip:0x1400069fa10 zst:0x1400069fa08] Getters:map[file:0x140008fc6e0 http:0x140006388c0 https:0x14000638910] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 04:16:21.203766   15975 out_reason.go:110] 
	W0729 04:16:21.211691   15975 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:16:21.214630   15975 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-753000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (12.49s)
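
The root cause is the 404 on the kubectl checksum URL: dl.k8s.io serves no darwin/arm64 artifacts for v1.20.0 (darwin/arm64 kubectl binaries appear to have been published only from later Kubernetes releases onward). A minimal sketch to confirm the 404 by hand, using the checksum URL verbatim from the error above; the curl flags are illustrative, and -L is needed because dl.k8s.io redirects to the release bucket:

    # expect the final response to be the same 404 the getter reported
    curl -sSIL "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"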

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
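
This subtest fails as a direct consequence of the json-events failure above: it only checks that kubectl landed in the cache, and that cache write never happened. Reproducing the check is a single stat against the path from the log:

    # path copied verbatim from the failure message; expect "No such file or directory"
    stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl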

TestOffline (10.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-179000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-179000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.117165542s)

-- stdout --
	* [offline-docker-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-179000" primary control-plane node in "offline-docker-179000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-179000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:27:31.241864   17812 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:27:31.242011   17812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:31.242015   17812 out.go:304] Setting ErrFile to fd 2...
	I0729 04:27:31.242017   17812 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:31.242131   17812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:27:31.243365   17812 out.go:298] Setting JSON to false
	I0729 04:27:31.260815   17812 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8820,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:27:31.260886   17812 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:27:31.265114   17812 out.go:177] * [offline-docker-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:27:31.271146   17812 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:27:31.271160   17812 notify.go:220] Checking for updates...
	I0729 04:27:31.277074   17812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:27:31.280140   17812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:27:31.283152   17812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:27:31.286099   17812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:27:31.289129   17812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:27:31.292448   17812 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:27:31.292503   17812 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:27:31.296090   17812 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:27:31.303093   17812 start.go:297] selected driver: qemu2
	I0729 04:27:31.303104   17812 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:27:31.303111   17812 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:27:31.304990   17812 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:27:31.308112   17812 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:27:31.311207   17812 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:27:31.311221   17812 cni.go:84] Creating CNI manager for ""
	I0729 04:27:31.311231   17812 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:27:31.311239   17812 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:27:31.311273   17812 start.go:340] cluster config:
	{Name:offline-docker-179000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-179000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:27:31.314965   17812 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:27:31.322112   17812 out.go:177] * Starting "offline-docker-179000" primary control-plane node in "offline-docker-179000" cluster
	I0729 04:27:31.325906   17812 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:27:31.325933   17812 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:27:31.325939   17812 cache.go:56] Caching tarball of preloaded images
	I0729 04:27:31.326008   17812 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:27:31.326013   17812 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:27:31.326089   17812 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/offline-docker-179000/config.json ...
	I0729 04:27:31.326099   17812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/offline-docker-179000/config.json: {Name:mka8ef0fe4f118dbb38886170117d59e4ddc32c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:27:31.326394   17812 start.go:360] acquireMachinesLock for offline-docker-179000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:31.326425   17812 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "offline-docker-179000"
	I0729 04:27:31.326436   17812 start.go:93] Provisioning new machine with config: &{Name:offline-docker-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-179000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:31.326464   17812 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:31.330148   17812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:31.345817   17812 start.go:159] libmachine.API.Create for "offline-docker-179000" (driver="qemu2")
	I0729 04:27:31.345845   17812 client.go:168] LocalClient.Create starting
	I0729 04:27:31.345920   17812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:31.345950   17812 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:31.345961   17812 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:31.346005   17812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:31.346031   17812 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:31.346040   17812 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:31.346389   17812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:31.493621   17812 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:31.655953   17812 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:31.655965   17812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:31.656295   17812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2
	I0729 04:27:31.674104   17812 main.go:141] libmachine: STDOUT: 
	I0729 04:27:31.674135   17812 main.go:141] libmachine: STDERR: 
	I0729 04:27:31.674232   17812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2 +20000M
	I0729 04:27:31.684856   17812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:31.684885   17812 main.go:141] libmachine: STDERR: 
	I0729 04:27:31.684909   17812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2
	I0729 04:27:31.684917   17812 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:31.684938   17812 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:31.684969   17812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:59:24:c6:21:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2
	I0729 04:27:31.687116   17812 main.go:141] libmachine: STDOUT: 
	I0729 04:27:31.687137   17812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:31.687157   17812 client.go:171] duration metric: took 341.315166ms to LocalClient.Create
	I0729 04:27:33.688382   17812 start.go:128] duration metric: took 2.3619595s to createHost
	I0729 04:27:33.688442   17812 start.go:83] releasing machines lock for "offline-docker-179000", held for 2.362070916s
	W0729 04:27:33.688461   17812 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:33.721361   17812 out.go:177] * Deleting "offline-docker-179000" in qemu2 ...
	W0729 04:27:33.732581   17812 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:33.732596   17812 start.go:729] Will try again in 5 seconds ...
	I0729 04:27:38.734523   17812 start.go:360] acquireMachinesLock for offline-docker-179000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:38.734595   17812 start.go:364] duration metric: took 51.416µs to acquireMachinesLock for "offline-docker-179000"
	I0729 04:27:38.734613   17812 start.go:93] Provisioning new machine with config: &{Name:offline-docker-179000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-179000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:38.734680   17812 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:38.745209   17812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:38.760252   17812 start.go:159] libmachine.API.Create for "offline-docker-179000" (driver="qemu2")
	I0729 04:27:38.760280   17812 client.go:168] LocalClient.Create starting
	I0729 04:27:38.760341   17812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:38.760376   17812 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:38.760385   17812 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:38.760422   17812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:38.760445   17812 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:38.760453   17812 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:38.760731   17812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:39.116168   17812 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:39.265782   17812 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:39.265792   17812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:39.266044   17812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2
	I0729 04:27:39.279112   17812 main.go:141] libmachine: STDOUT: 
	I0729 04:27:39.279203   17812 main.go:141] libmachine: STDERR: 
	I0729 04:27:39.279267   17812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2 +20000M
	I0729 04:27:39.287285   17812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:39.287300   17812 main.go:141] libmachine: STDERR: 
	I0729 04:27:39.287309   17812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2
	I0729 04:27:39.287322   17812 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:39.287335   17812 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:39.287375   17812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:fd:da:31:6a:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/offline-docker-179000/disk.qcow2
	I0729 04:27:39.289052   17812 main.go:141] libmachine: STDOUT: 
	I0729 04:27:39.289068   17812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:39.289079   17812 client.go:171] duration metric: took 528.808541ms to LocalClient.Create
	I0729 04:27:41.291297   17812 start.go:128] duration metric: took 2.556648667s to createHost
	I0729 04:27:41.291360   17812 start.go:83] releasing machines lock for "offline-docker-179000", held for 2.556817583s
	W0729 04:27:41.291708   17812 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:41.299390   17812 out.go:177] 
	W0729 04:27:41.303446   17812 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:27:41.303480   17812 out.go:239] * 
	* 
	W0729 04:27:41.306230   17812 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:27:41.315245   17812 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-179000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-29 04:27:41.331335 -0700 PDT m=+692.640117751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-179000 -n offline-docker-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-179000 -n offline-docker-179000: exit status 7 (67.630333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-179000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-179000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-179000
--- FAIL: TestOffline (10.27s)
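
TestOffline, like most qemu2 starts in this run, fails before the VM ever boots: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet. A short triage sketch for the build host follows; the socket and client paths are taken from the libmachine invocation in the log, while the daemon binary location and its flags are assumptions based on a standard socket_vmnet install, not on anything in this report:

    # does the socket exist, and is the daemon process alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # if the daemon is down, start it manually (root is required for vmnet;
    # the gateway address here is illustrative)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet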

TestAddons/Setup (10.43s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-621000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-621000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.426425125s)

-- stdout --
	* [addons-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-621000" primary control-plane node in "addons-621000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-621000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:16:36.559466   16088 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:36.559584   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:36.559588   16088 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:36.559590   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:36.559710   16088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:16:36.560729   16088 out.go:298] Setting JSON to false
	I0729 04:16:36.576698   16088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8165,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:16:36.576768   16088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:36.578791   16088 out.go:177] * [addons-621000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:36.585544   16088 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:16:36.585607   16088 notify.go:220] Checking for updates...
	I0729 04:16:36.592491   16088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:16:36.595511   16088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:36.598494   16088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:36.601483   16088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:16:36.608479   16088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:16:36.612642   16088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:36.616576   16088 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:16:36.624555   16088 start.go:297] selected driver: qemu2
	I0729 04:16:36.624564   16088 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:16:36.624570   16088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:16:36.626846   16088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:16:36.630526   16088 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:16:36.634557   16088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:16:36.634573   16088 cni.go:84] Creating CNI manager for ""
	I0729 04:16:36.634581   16088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:16:36.634586   16088 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:16:36.634618   16088 start.go:340] cluster config:
	{Name:addons-621000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-621000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:36.638282   16088 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:36.647512   16088 out.go:177] * Starting "addons-621000" primary control-plane node in "addons-621000" cluster
	I0729 04:16:36.651538   16088 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:16:36.651551   16088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:16:36.651558   16088 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:36.651615   16088 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:16:36.651621   16088 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:16:36.651855   16088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/addons-621000/config.json ...
	I0729 04:16:36.651866   16088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/addons-621000/config.json: {Name:mk6ed8a5afc326dc4b814f4affe774a5372e54f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:16:36.652292   16088 start.go:360] acquireMachinesLock for addons-621000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:36.652367   16088 start.go:364] duration metric: took 68.417µs to acquireMachinesLock for "addons-621000"
	I0729 04:16:36.652380   16088 start.go:93] Provisioning new machine with config: &{Name:addons-621000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-621000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:16:36.652406   16088 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:16:36.657504   16088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 04:16:36.676748   16088 start.go:159] libmachine.API.Create for "addons-621000" (driver="qemu2")
	I0729 04:16:36.676775   16088 client.go:168] LocalClient.Create starting
	I0729 04:16:36.676895   16088 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:16:36.791079   16088 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:16:36.887521   16088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:16:37.344666   16088 main.go:141] libmachine: Creating SSH key...
	I0729 04:16:37.412636   16088 main.go:141] libmachine: Creating Disk image...
	I0729 04:16:37.412653   16088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:16:37.412857   16088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2
	I0729 04:16:37.421904   16088 main.go:141] libmachine: STDOUT: 
	I0729 04:16:37.421927   16088 main.go:141] libmachine: STDERR: 
	I0729 04:16:37.421976   16088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2 +20000M
	I0729 04:16:37.429813   16088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:16:37.429826   16088 main.go:141] libmachine: STDERR: 
	I0729 04:16:37.429837   16088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2
	I0729 04:16:37.429842   16088 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:16:37.429860   16088 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:37.429881   16088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:3b:e1:00:ca:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2
	I0729 04:16:37.431399   16088 main.go:141] libmachine: STDOUT: 
	I0729 04:16:37.431412   16088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:16:37.431437   16088 client.go:171] duration metric: took 754.675292ms to LocalClient.Create
	I0729 04:16:39.433569   16088 start.go:128] duration metric: took 2.781208875s to createHost
	I0729 04:16:39.433634   16088 start.go:83] releasing machines lock for "addons-621000", held for 2.781323959s
	W0729 04:16:39.433686   16088 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:16:39.448773   16088 out.go:177] * Deleting "addons-621000" in qemu2 ...
	W0729 04:16:39.474129   16088 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:16:39.474158   16088 start.go:729] Will try again in 5 seconds ...
	I0729 04:16:44.476277   16088 start.go:360] acquireMachinesLock for addons-621000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:16:44.476687   16088 start.go:364] duration metric: took 326.291µs to acquireMachinesLock for "addons-621000"
	I0729 04:16:44.476810   16088 start.go:93] Provisioning new machine with config: &{Name:addons-621000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-621000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:16:44.477128   16088 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:16:44.485996   16088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 04:16:44.534776   16088 start.go:159] libmachine.API.Create for "addons-621000" (driver="qemu2")
	I0729 04:16:44.534817   16088 client.go:168] LocalClient.Create starting
	I0729 04:16:44.534947   16088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:16:44.535018   16088 main.go:141] libmachine: Decoding PEM data...
	I0729 04:16:44.535034   16088 main.go:141] libmachine: Parsing certificate...
	I0729 04:16:44.535120   16088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:16:44.535166   16088 main.go:141] libmachine: Decoding PEM data...
	I0729 04:16:44.535180   16088 main.go:141] libmachine: Parsing certificate...
	I0729 04:16:44.535832   16088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:16:44.777388   16088 main.go:141] libmachine: Creating SSH key...
	I0729 04:16:44.896772   16088 main.go:141] libmachine: Creating Disk image...
	I0729 04:16:44.896778   16088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:16:44.896979   16088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2
	I0729 04:16:44.906180   16088 main.go:141] libmachine: STDOUT: 
	I0729 04:16:44.906198   16088 main.go:141] libmachine: STDERR: 
	I0729 04:16:44.906265   16088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2 +20000M
	I0729 04:16:44.914365   16088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:16:44.914385   16088 main.go:141] libmachine: STDERR: 
	I0729 04:16:44.914393   16088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2
	I0729 04:16:44.914397   16088 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:16:44.914407   16088 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:16:44.914427   16088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:36:90:3b:18:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/addons-621000/disk.qcow2
	I0729 04:16:44.916064   16088 main.go:141] libmachine: STDOUT: 
	I0729 04:16:44.916086   16088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:16:44.916098   16088 client.go:171] duration metric: took 381.286ms to LocalClient.Create
	I0729 04:16:46.918273   16088 start.go:128] duration metric: took 2.441133833s to createHost
	I0729 04:16:46.918375   16088 start.go:83] releasing machines lock for "addons-621000", held for 2.441724125s
	W0729 04:16:46.918734   16088 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-621000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-621000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:16:46.931699   16088 out.go:177] 
	W0729 04:16:46.936199   16088 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:16:46.936229   16088 out.go:239] * 
	* 
	W0729 04:16:46.937996   16088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:16:46.947172   16088 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-621000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.43s)
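Note: as the stderr log above shows, libmachine launches QEMU through socket_vmnet_client, which first connects to the daemon's socket and then executes the wrapped command. Assuming that connect-then-exec behavior, the failing step can be reproduced in isolation by wrapping a no-op instead of qemu-system-aarch64 (illustrative, not part of the test):

	# prints the same 'Connection refused' while the daemon is down,
	# and exits cleanly once it is healthy again
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true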

TestCertOptions (12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-193000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-193000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.734642792s)

-- stdout --
	* [cert-options-193000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-193000" primary control-plane node in "cert-options-193000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-193000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-193000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-193000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-193000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-193000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.878ms)

-- stdout --
	* The control-plane node cert-options-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-193000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-193000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-193000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-193000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-193000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.180291ms)

-- stdout --
	* The control-plane node cert-options-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-193000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-193000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-193000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-193000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-29 04:28:16.04929 -0700 PDT m=+727.358924585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-193000 -n cert-options-193000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-193000 -n cert-options-193000: exit status 7 (29.511166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-193000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-193000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-193000
--- FAIL: TestCertOptions (12.00s)
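Note: on a healthy cluster, the assertions at cert_options_test.go:60 and :69 amount to dumping the apiserver certificate and checking its subject alternative names. A sketch of running that check by hand, reusing the exact ssh command from the test (the grep filter is illustrative, not the test's own code):

	out/minikube-darwin-arm64 -p cert-options-193000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# should list 127.0.0.1, 192.168.15.15, localhost and www.google.com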

TestCertExpiration (197.35s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-855000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-855000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.004604667s)

-- stdout --
	* [cert-expiration-855000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-855000" primary control-plane node in "cert-expiration-855000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-855000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-855000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-855000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-855000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-855000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.223002834s)

-- stdout --
	* [cert-expiration-855000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-855000" primary control-plane node in "cert-expiration-855000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-855000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-855000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-855000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-855000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-855000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-855000" primary control-plane node in "cert-expiration-855000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-855000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-855000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-855000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 04:31:18.784304 -0700 PDT m=+910.098422835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-855000 -n cert-expiration-855000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-855000 -n cert-expiration-855000: exit status 7 (48.937958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-855000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-855000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-855000
--- FAIL: TestCertExpiration (197.35s)
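Note: the 197.35s duration is mostly the test design, not the failures: the first start uses --cert-expiration=3m, the test then waits out those three minutes so the certificates genuinely expire, and the second start with --cert-expiration=8760h is expected to warn about the expired certs (here both starts failed quickly, in ~12s and ~5s, and the wait accounts for the rest). On a working cluster the expiry can be confirmed directly; a sketch using the certificate path from this report (the openssl invocation is illustrative):

	# print the notAfter timestamp of the apiserver certificate inside the VM
	out/minikube-darwin-arm64 -p cert-expiration-855000 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"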

TestDockerFlags (12.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-060000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-060000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.035227125s)

-- stdout --
	* [docker-flags-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-060000" primary control-plane node in "docker-flags-060000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-060000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:27:51.761006   18014 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:27:51.761138   18014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:51.761142   18014 out.go:304] Setting ErrFile to fd 2...
	I0729 04:27:51.761144   18014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:51.761271   18014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:27:51.762249   18014 out.go:298] Setting JSON to false
	I0729 04:27:51.778845   18014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8840,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:27:51.778911   18014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:27:51.793700   18014 out.go:177] * [docker-flags-060000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:27:51.801613   18014 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:27:51.801704   18014 notify.go:220] Checking for updates...
	I0729 04:27:51.809604   18014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:27:51.813646   18014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:27:51.816643   18014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:27:51.819611   18014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:27:51.822588   18014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:27:51.826022   18014 config.go:182] Loaded profile config "force-systemd-flag-006000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:27:51.826092   18014 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:27:51.826145   18014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:27:51.830582   18014 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:27:51.836746   18014 start.go:297] selected driver: qemu2
	I0729 04:27:51.836756   18014 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:27:51.836765   18014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:27:51.838986   18014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:27:51.842640   18014 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:27:51.845685   18014 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 04:27:51.845704   18014 cni.go:84] Creating CNI manager for ""
	I0729 04:27:51.845711   18014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:27:51.845713   18014 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:27:51.845755   18014 start.go:340] cluster config:
	{Name:docker-flags-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:27:51.849308   18014 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:27:51.856638   18014 out.go:177] * Starting "docker-flags-060000" primary control-plane node in "docker-flags-060000" cluster
	I0729 04:27:51.860613   18014 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:27:51.860629   18014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:27:51.860639   18014 cache.go:56] Caching tarball of preloaded images
	I0729 04:27:51.860699   18014 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:27:51.860704   18014 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:27:51.860777   18014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/docker-flags-060000/config.json ...
	I0729 04:27:51.860788   18014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/docker-flags-060000/config.json: {Name:mkda9a4aad2fd24ce59b530c929a59c891f566fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:27:51.861015   18014 start.go:360] acquireMachinesLock for docker-flags-060000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:53.850852   18014 start.go:364] duration metric: took 1.989833792s to acquireMachinesLock for "docker-flags-060000"
	I0729 04:27:53.850964   18014 start.go:93] Provisioning new machine with config: &{Name:docker-flags-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:53.851365   18014 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:53.861019   18014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:53.912663   18014 start.go:159] libmachine.API.Create for "docker-flags-060000" (driver="qemu2")
	I0729 04:27:53.912702   18014 client.go:168] LocalClient.Create starting
	I0729 04:27:53.912834   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:53.912890   18014 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:53.912915   18014 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:53.912994   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:53.913037   18014 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:53.913048   18014 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:53.913692   18014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:54.089972   18014 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:54.183187   18014 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:54.183194   18014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:54.183456   18014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2
	I0729 04:27:54.192684   18014 main.go:141] libmachine: STDOUT: 
	I0729 04:27:54.192700   18014 main.go:141] libmachine: STDERR: 
	I0729 04:27:54.192742   18014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2 +20000M
	I0729 04:27:54.200616   18014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:54.200634   18014 main.go:141] libmachine: STDERR: 
	I0729 04:27:54.200650   18014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2
	I0729 04:27:54.200656   18014 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:54.200665   18014 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:54.200695   18014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:1b:43:e9:e4:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2
	I0729 04:27:54.202377   18014 main.go:141] libmachine: STDOUT: 
	I0729 04:27:54.202390   18014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:54.202405   18014 client.go:171] duration metric: took 289.703083ms to LocalClient.Create
	I0729 04:27:56.204551   18014 start.go:128] duration metric: took 2.35321025s to createHost
	I0729 04:27:56.204611   18014 start.go:83] releasing machines lock for "docker-flags-060000", held for 2.353772417s
	W0729 04:27:56.204672   18014 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:56.214245   18014 out.go:177] * Deleting "docker-flags-060000" in qemu2 ...
	W0729 04:27:56.245336   18014 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:56.245368   18014 start.go:729] Will try again in 5 seconds ...
	I0729 04:28:01.247402   18014 start.go:360] acquireMachinesLock for docker-flags-060000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:28:01.247719   18014 start.go:364] duration metric: took 236.125µs to acquireMachinesLock for "docker-flags-060000"
	I0729 04:28:01.247805   18014 start.go:93] Provisioning new machine with config: &{Name:docker-flags-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:28:01.247975   18014 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:28:01.256699   18014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:28:01.290168   18014 start.go:159] libmachine.API.Create for "docker-flags-060000" (driver="qemu2")
	I0729 04:28:01.290221   18014 client.go:168] LocalClient.Create starting
	I0729 04:28:01.290299   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:28:01.290339   18014 main.go:141] libmachine: Decoding PEM data...
	I0729 04:28:01.290351   18014 main.go:141] libmachine: Parsing certificate...
	I0729 04:28:01.290396   18014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:28:01.290421   18014 main.go:141] libmachine: Decoding PEM data...
	I0729 04:28:01.290432   18014 main.go:141] libmachine: Parsing certificate...
	I0729 04:28:01.291011   18014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:28:01.604200   18014 main.go:141] libmachine: Creating SSH key...
	I0729 04:28:01.680961   18014 main.go:141] libmachine: Creating Disk image...
	I0729 04:28:01.680971   18014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:28:01.681165   18014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2
	I0729 04:28:01.702827   18014 main.go:141] libmachine: STDOUT: 
	I0729 04:28:01.702849   18014 main.go:141] libmachine: STDERR: 
	I0729 04:28:01.702907   18014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2 +20000M
	I0729 04:28:01.715013   18014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:28:01.715033   18014 main.go:141] libmachine: STDERR: 
	I0729 04:28:01.715049   18014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2
	I0729 04:28:01.715055   18014 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:28:01.715068   18014 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:28:01.715101   18014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:d3:c5:63:37:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/docker-flags-060000/disk.qcow2
	I0729 04:28:01.716878   18014 main.go:141] libmachine: STDOUT: 
	I0729 04:28:01.716894   18014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:28:01.716907   18014 client.go:171] duration metric: took 426.689167ms to LocalClient.Create
	I0729 04:28:03.719171   18014 start.go:128] duration metric: took 2.471211292s to createHost
	I0729 04:28:03.719264   18014 start.go:83] releasing machines lock for "docker-flags-060000", held for 2.471587542s
	W0729 04:28:03.719583   18014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-060000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:28:03.734237   18014 out.go:177] 
	W0729 04:28:03.739307   18014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:28:03.739366   18014 out.go:239] * 
	* 
	W0729 04:28:03.742929   18014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:28:03.752131   18014 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-060000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (100.138834ms)

-- stdout --
	* The control-plane node docker-flags-060000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-060000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-060000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-060000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-060000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-060000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-060000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-060000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (101.67875ms)

-- stdout --
	* The control-plane node docker-flags-060000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-060000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-060000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-060000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-060000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-060000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 04:28:03.965353 -0700 PDT m=+715.274691043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-060000 -n docker-flags-060000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-060000 -n docker-flags-060000: exit status 7 (35.664292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-060000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-060000
--- FAIL: TestDockerFlags (12.43s)
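Root-cause note: every start attempt in this test dies at the same step. The qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the later ssh/systemctl assertions run against a Stopped host (exit status 83). A minimal sketch for checking and restarting the daemon on the CI host, assuming the /opt/socket_vmnet install layout shown in the logs (the --vmnet-gateway value is illustrative, not taken from this run):

	# does the socket the logs point at exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	# start the daemon by hand; adjust the gateway to the host's vmnet subnet
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet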

TestForceSystemdFlag (12.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-006000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-006000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.335277s)

-- stdout --
	* [force-systemd-flag-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-006000" primary control-plane node in "force-systemd-flag-006000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-006000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:27:48.925045   17994 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:27:48.925210   17994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:48.925214   17994 out.go:304] Setting ErrFile to fd 2...
	I0729 04:27:48.925216   17994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:48.925330   17994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:27:48.926384   17994 out.go:298] Setting JSON to false
	I0729 04:27:48.944201   17994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8837,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:27:48.944287   17994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:27:48.970460   17994 out.go:177] * [force-systemd-flag-006000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:27:48.982511   17994 notify.go:220] Checking for updates...
	I0729 04:27:48.986392   17994 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:27:48.998427   17994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:27:49.006415   17994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:27:49.014455   17994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:27:49.021476   17994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:27:49.027450   17994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:27:49.031917   17994 config.go:182] Loaded profile config "force-systemd-env-914000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:27:49.032011   17994 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:27:49.032089   17994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:27:49.045522   17994 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:27:49.049456   17994 start.go:297] selected driver: qemu2
	I0729 04:27:49.049464   17994 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:27:49.049472   17994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:27:49.052403   17994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:27:49.061521   17994 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:27:49.065587   17994 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:27:49.065611   17994 cni.go:84] Creating CNI manager for ""
	I0729 04:27:49.065624   17994 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:27:49.065630   17994 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:27:49.065687   17994 start.go:340] cluster config:
	{Name:force-systemd-flag-006000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:27:49.070954   17994 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:27:49.084470   17994 out.go:177] * Starting "force-systemd-flag-006000" primary control-plane node in "force-systemd-flag-006000" cluster
	I0729 04:27:49.092517   17994 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:27:49.092545   17994 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:27:49.092569   17994 cache.go:56] Caching tarball of preloaded images
	I0729 04:27:49.092680   17994 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:27:49.092689   17994 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:27:49.092777   17994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/force-systemd-flag-006000/config.json ...
	I0729 04:27:49.092792   17994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/force-systemd-flag-006000/config.json: {Name:mk6005a3a923d97cae896bfa3271b5fb83e3d89f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:27:49.093207   17994 start.go:360] acquireMachinesLock for force-systemd-flag-006000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:51.353918   17994 start.go:364] duration metric: took 2.260735625s to acquireMachinesLock for "force-systemd-flag-006000"
	I0729 04:27:51.354132   17994 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:51.354321   17994 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:51.363713   17994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:51.412224   17994 start.go:159] libmachine.API.Create for "force-systemd-flag-006000" (driver="qemu2")
	I0729 04:27:51.412278   17994 client.go:168] LocalClient.Create starting
	I0729 04:27:51.412372   17994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:51.412413   17994 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:51.412437   17994 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:51.412501   17994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:51.412531   17994 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:51.412545   17994 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:51.413175   17994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:51.748116   17994 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:51.825976   17994 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:51.825983   17994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:51.826155   17994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2
	I0729 04:27:51.838180   17994 main.go:141] libmachine: STDOUT: 
	I0729 04:27:51.838201   17994 main.go:141] libmachine: STDERR: 
	I0729 04:27:51.838266   17994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2 +20000M
	I0729 04:27:51.846657   17994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:51.846672   17994 main.go:141] libmachine: STDERR: 
	I0729 04:27:51.846690   17994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2
	I0729 04:27:51.846697   17994 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:51.846712   17994 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:51.846736   17994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8e:e4:28:b0:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2
	I0729 04:27:51.848422   17994 main.go:141] libmachine: STDOUT: 
	I0729 04:27:51.848440   17994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:51.848458   17994 client.go:171] duration metric: took 436.1845ms to LocalClient.Create
	I0729 04:27:53.850588   17994 start.go:128] duration metric: took 2.496295083s to createHost
	I0729 04:27:53.850638   17994 start.go:83] releasing machines lock for "force-systemd-flag-006000", held for 2.496714792s
	W0729 04:27:53.850702   17994 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:53.870037   17994 out.go:177] * Deleting "force-systemd-flag-006000" in qemu2 ...
	W0729 04:27:53.891749   17994 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:53.891777   17994 start.go:729] Will try again in 5 seconds ...
	I0729 04:27:58.893910   17994 start.go:360] acquireMachinesLock for force-systemd-flag-006000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:58.894404   17994 start.go:364] duration metric: took 384.833µs to acquireMachinesLock for "force-systemd-flag-006000"
	I0729 04:27:58.894552   17994 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-006000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-006000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:58.894749   17994 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:58.909518   17994 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:58.961162   17994 start.go:159] libmachine.API.Create for "force-systemd-flag-006000" (driver="qemu2")
	I0729 04:27:58.961211   17994 client.go:168] LocalClient.Create starting
	I0729 04:27:58.961316   17994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:58.961381   17994 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:58.961399   17994 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:58.961457   17994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:58.961501   17994 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:58.961515   17994 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:58.962086   17994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:59.119280   17994 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:59.162111   17994 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:59.162116   17994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:59.162312   17994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2
	I0729 04:27:59.171760   17994 main.go:141] libmachine: STDOUT: 
	I0729 04:27:59.171780   17994 main.go:141] libmachine: STDERR: 
	I0729 04:27:59.171829   17994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2 +20000M
	I0729 04:27:59.179615   17994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:59.179629   17994 main.go:141] libmachine: STDERR: 
	I0729 04:27:59.179644   17994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2
	I0729 04:27:59.179649   17994 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:59.179666   17994 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:59.179694   17994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:1d:8f:fb:94:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-flag-006000/disk.qcow2
	I0729 04:27:59.181283   17994 main.go:141] libmachine: STDOUT: 
	I0729 04:27:59.181296   17994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:59.181309   17994 client.go:171] duration metric: took 220.098125ms to LocalClient.Create
	I0729 04:28:01.183492   17994 start.go:128] duration metric: took 2.288742625s to createHost
	I0729 04:28:01.183564   17994 start.go:83] releasing machines lock for "force-systemd-flag-006000", held for 2.289191417s
	W0729 04:28:01.183998   17994 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-006000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-006000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:28:01.201808   17994 out.go:177] 
	W0729 04:28:01.205687   17994 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:28:01.205728   17994 out.go:239] * 
	* 
	W0729 04:28:01.208910   17994 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:28:01.218662   17994 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-006000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-006000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-006000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.655292ms)

-- stdout --
	* The control-plane node force-systemd-flag-006000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-006000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-006000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 04:28:01.325906 -0700 PDT m=+712.635178793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-006000 -n force-systemd-flag-006000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-006000 -n force-systemd-flag-006000: exit status 7 (37.214417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-006000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-006000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-006000
--- FAIL: TestForceSystemdFlag (12.66s)
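Note: the failure here is the same socket_vmnet connection refusal as in TestDockerFlags above; the test never reaches its real assertion, which is that dockerd inside the VM runs with the systemd cgroup driver. On a healthy cluster, the check the test performs would look like this (command taken verbatim from the trace; the expected output is an assumption based on what --force-systemd is documented to do):

	out/minikube-darwin-arm64 -p force-systemd-flag-006000 ssh "docker info --format {{.CgroupDriver}}"
	# expected on success: systemd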

TestForceSystemdEnv (10.25s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-914000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-914000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.915439042s)

-- stdout --
	* [force-systemd-env-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-914000" primary control-plane node in "force-systemd-env-914000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-914000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:27:41.505592   17956 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:27:41.505713   17956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:41.505716   17956 out.go:304] Setting ErrFile to fd 2...
	I0729 04:27:41.505719   17956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:27:41.505833   17956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:27:41.506907   17956 out.go:298] Setting JSON to false
	I0729 04:27:41.523170   17956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8830,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:27:41.523235   17956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:27:41.529157   17956 out.go:177] * [force-systemd-env-914000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:27:41.536170   17956 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:27:41.536223   17956 notify.go:220] Checking for updates...
	I0729 04:27:41.545112   17956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:27:41.548113   17956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:27:41.552002   17956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:27:41.555103   17956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:27:41.558080   17956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 04:27:41.561377   17956 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:27:41.561433   17956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:27:41.565086   17956 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:27:41.572109   17956 start.go:297] selected driver: qemu2
	I0729 04:27:41.572118   17956 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:27:41.572124   17956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:27:41.574554   17956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:27:41.579116   17956 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:27:41.582212   17956 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:27:41.582235   17956 cni.go:84] Creating CNI manager for ""
	I0729 04:27:41.582255   17956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:27:41.582263   17956 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:27:41.582286   17956 start.go:340] cluster config:
	{Name:force-systemd-env-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:27:41.586091   17956 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:27:41.595143   17956 out.go:177] * Starting "force-systemd-env-914000" primary control-plane node in "force-systemd-env-914000" cluster
	I0729 04:27:41.599060   17956 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:27:41.599084   17956 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:27:41.599096   17956 cache.go:56] Caching tarball of preloaded images
	I0729 04:27:41.599160   17956 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:27:41.599166   17956 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:27:41.599250   17956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/force-systemd-env-914000/config.json ...
	I0729 04:27:41.599263   17956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/force-systemd-env-914000/config.json: {Name:mk2effd7946fce4a3928bbeb69dc89eda9e767e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:27:41.599486   17956 start.go:360] acquireMachinesLock for force-systemd-env-914000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:41.599522   17956 start.go:364] duration metric: took 29.459µs to acquireMachinesLock for "force-systemd-env-914000"
	I0729 04:27:41.599535   17956 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:41.599564   17956 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:41.606043   17956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:41.624509   17956 start.go:159] libmachine.API.Create for "force-systemd-env-914000" (driver="qemu2")
	I0729 04:27:41.624536   17956 client.go:168] LocalClient.Create starting
	I0729 04:27:41.624620   17956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:41.624652   17956 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:41.624662   17956 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:41.624705   17956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:41.624729   17956 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:41.624737   17956 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:41.625094   17956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:41.773578   17956 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:41.840687   17956 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:41.840692   17956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:41.840905   17956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2
	I0729 04:27:41.850075   17956 main.go:141] libmachine: STDOUT: 
	I0729 04:27:41.850092   17956 main.go:141] libmachine: STDERR: 
	I0729 04:27:41.850135   17956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2 +20000M
	I0729 04:27:41.857925   17956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:41.857941   17956 main.go:141] libmachine: STDERR: 
	I0729 04:27:41.857956   17956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2
	I0729 04:27:41.857960   17956 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:41.857976   17956 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:41.858015   17956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4e:cf:c9:36:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2
	I0729 04:27:41.859624   17956 main.go:141] libmachine: STDOUT: 
	I0729 04:27:41.859639   17956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:41.859657   17956 client.go:171] duration metric: took 235.121459ms to LocalClient.Create
	I0729 04:27:43.861786   17956 start.go:128] duration metric: took 2.262254667s to createHost
	I0729 04:27:43.861850   17956 start.go:83] releasing machines lock for "force-systemd-env-914000", held for 2.262374167s
	W0729 04:27:43.861907   17956 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:43.878880   17956 out.go:177] * Deleting "force-systemd-env-914000" in qemu2 ...
	W0729 04:27:43.907444   17956 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:43.907473   17956 start.go:729] Will try again in 5 seconds ...
	I0729 04:27:48.909113   17956 start.go:360] acquireMachinesLock for force-systemd-env-914000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:48.909194   17956 start.go:364] duration metric: took 59.792µs to acquireMachinesLock for "force-systemd-env-914000"
	I0729 04:27:48.909205   17956 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-914000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:48.909269   17956 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:48.914476   17956 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0729 04:27:48.930269   17956 start.go:159] libmachine.API.Create for "force-systemd-env-914000" (driver="qemu2")
	I0729 04:27:48.930303   17956 client.go:168] LocalClient.Create starting
	I0729 04:27:48.930362   17956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:48.930393   17956 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:48.930403   17956 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:48.930440   17956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:48.930463   17956 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:48.930469   17956 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:48.930719   17956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:49.216632   17956 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:49.332702   17956 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:49.332708   17956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:49.332908   17956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2
	I0729 04:27:49.341954   17956 main.go:141] libmachine: STDOUT: 
	I0729 04:27:49.341974   17956 main.go:141] libmachine: STDERR: 
	I0729 04:27:49.342023   17956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2 +20000M
	I0729 04:27:49.349817   17956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:49.349841   17956 main.go:141] libmachine: STDERR: 
	I0729 04:27:49.349858   17956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2
	I0729 04:27:49.349862   17956 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:49.349870   17956 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:49.349899   17956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:c7:78:54:de:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/force-systemd-env-914000/disk.qcow2
	I0729 04:27:49.351539   17956 main.go:141] libmachine: STDOUT: 
	I0729 04:27:49.351552   17956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:49.351562   17956 client.go:171] duration metric: took 421.265917ms to LocalClient.Create
	I0729 04:27:51.353678   17956 start.go:128] duration metric: took 2.444451375s to createHost
	I0729 04:27:51.353760   17956 start.go:83] releasing machines lock for "force-systemd-env-914000", held for 2.444616834s
	W0729 04:27:51.354096   17956 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:51.367662   17956 out.go:177] 
	W0729 04:27:51.371730   17956 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:27:51.371791   17956 out.go:239] * 
	* 
	W0729 04:27:51.373731   17956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:27:51.382618   17956 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-914000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-914000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-914000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (105.387375ms)

-- stdout --
	* The control-plane node force-systemd-env-914000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-914000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-914000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 04:27:51.49994 -0700 PDT m=+702.808972210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-914000 -n force-systemd-env-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-914000 -n force-systemd-env-914000: exit status 7 (38.433ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-914000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-914000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-914000
--- FAIL: TestForceSystemdEnv (10.25s)
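Every failure in this test reduces to one root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go probe (a diagnostic sketch, not part of the test suite; the socket path is taken from the logs above) reproduces that connectivity check directly:

    // probe_socket_vmnet.go: dial the unix socket that the failures above
    // report as refusing connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path reported in the failures above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // This is the state the logs show: nothing is listening on the socket.
            fmt.Printf("cannot connect to %s: %v\n", sock, err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet daemon is accepting connections")
    }

If the probe fails the same way, the socket_vmnet daemon itself (typically started as root before the test run) has to be brought up before any qemu2 cluster can start.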

TestErrorSpam/setup (9.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-129000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-129000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 --driver=qemu2 : exit status 80 (9.778263167s)

-- stdout --
	* [nospam-129000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-129000" primary control-plane node in "nospam-129000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-129000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-129000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-129000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-129000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-129000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19341
- KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-129000" primary control-plane node in "nospam-129000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-129000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-129000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.78s)
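The "unexpected stderr" failures above come from error_spam_test.go scanning every stderr line against a set of permitted patterns. A simplified sketch of that kind of scan (the helper and allowlist here are hypothetical; the real test's filtering is more involved):

    package main

    import (
        "fmt"
        "strings"
    )

    // unexpectedStderr returns every non-empty stderr line that matches none
    // of the allowed substrings.
    func unexpectedStderr(stderr string, allowed []string) []string {
        var unexpected []string
        for _, line := range strings.Split(stderr, "\n") {
            if strings.TrimSpace(line) == "" {
                continue
            }
            permitted := false
            for _, pattern := range allowed {
                if strings.Contains(line, pattern) {
                    permitted = true
                    break
                }
            }
            if !permitted {
                unexpected = append(unexpected, line)
            }
        }
        return unexpected
    }

    func main() {
        // With an empty allowlist, both lines are flagged, mirroring the
        // failures above.
        stderr := "! StartHost failed, but will try again: ...\n* \n"
        for _, line := range unexpectedStderr(stderr, nil) {
            fmt.Printf("unexpected stderr: %q\n", line)
        }
    }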

TestFunctional/serial/StartWithProxy (9.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-356000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-356000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.857083791s)

-- stdout --
	* [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-356000" primary control-plane node in "functional-356000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-356000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52950 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52950 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52950 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-356000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19341
- KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-356000" primary control-plane node in "functional-356000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-356000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52950 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52950 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52950 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (69.777666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.93s)
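Both "want" failures above compare captured output against glob-style expectations of the form *text*. For patterns of that shape the check reduces to a substring test; a minimal sketch (matchesWant is a hypothetical helper, not the suite's actual matcher):

    package main

    import (
        "fmt"
        "strings"
    )

    // matchesWant reports whether output satisfies a "*text*"-style pattern
    // by testing for the inner text as a substring.
    func matchesWant(output, want string) bool {
        return strings.Contains(output, strings.Trim(want, "*"))
    }

    func main() {
        stdout := "* Automatically selected the socket_vmnet network"
        // false: the expected message never appeared, as in the failure above.
        fmt.Println(matchesWant(stdout, "*Found network options:*"))
    }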

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-356000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-356000 --alsologtostderr -v=8: exit status 80 (5.202479583s)

-- stdout --
	* [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-356000" primary control-plane node in "functional-356000" cluster
	* Restarting existing qemu2 VM for "functional-356000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-356000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:17:14.902769   16232 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:17:14.902889   16232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:17:14.902893   16232 out.go:304] Setting ErrFile to fd 2...
	I0729 04:17:14.902895   16232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:17:14.903019   16232 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:17:14.904030   16232 out.go:298] Setting JSON to false
	I0729 04:17:14.920179   16232 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8203,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:17:14.920245   16232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:17:14.925830   16232 out.go:177] * [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:17:14.932828   16232 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:17:14.932887   16232 notify.go:220] Checking for updates...
	I0729 04:17:14.943339   16232 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:17:14.954360   16232 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:17:14.958797   16232 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:17:14.961942   16232 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:17:14.964816   16232 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:17:14.968133   16232 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:17:14.968192   16232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:17:14.972819   16232 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:17:14.979755   16232 start.go:297] selected driver: qemu2
	I0729 04:17:14.979766   16232 start.go:901] validating driver "qemu2" against &{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:17:14.979835   16232 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:17:14.982198   16232 cni.go:84] Creating CNI manager for ""
	I0729 04:17:14.982213   16232 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:17:14.982267   16232 start.go:340] cluster config:
	{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:17:14.985906   16232 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:17:14.993725   16232 out.go:177] * Starting "functional-356000" primary control-plane node in "functional-356000" cluster
	I0729 04:17:14.997801   16232 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:17:14.997817   16232 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:17:14.997828   16232 cache.go:56] Caching tarball of preloaded images
	I0729 04:17:14.997894   16232 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:17:14.997900   16232 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:17:14.997974   16232 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/functional-356000/config.json ...
	I0729 04:17:14.998513   16232 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:17:14.998543   16232 start.go:364] duration metric: took 23.584µs to acquireMachinesLock for "functional-356000"
	I0729 04:17:14.998560   16232 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:17:14.998564   16232 fix.go:54] fixHost starting: 
	I0729 04:17:14.998698   16232 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
	W0729 04:17:14.998708   16232 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:17:15.006786   16232 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
	I0729 04:17:15.010822   16232 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:17:15.010862   16232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
	I0729 04:17:15.013014   16232 main.go:141] libmachine: STDOUT: 
	I0729 04:17:15.013037   16232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:17:15.013068   16232 fix.go:56] duration metric: took 14.504334ms for fixHost
	I0729 04:17:15.013072   16232 start.go:83] releasing machines lock for "functional-356000", held for 14.525583ms
	W0729 04:17:15.013079   16232 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:17:15.013113   16232 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:17:15.013118   16232 start.go:729] Will try again in 5 seconds ...
	I0729 04:17:20.015232   16232 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:17:20.015616   16232 start.go:364] duration metric: took 283.333µs to acquireMachinesLock for "functional-356000"
	I0729 04:17:20.015753   16232 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:17:20.015773   16232 fix.go:54] fixHost starting: 
	I0729 04:17:20.016503   16232 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
	W0729 04:17:20.016530   16232 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:17:20.020948   16232 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
	I0729 04:17:20.029978   16232 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:17:20.030229   16232 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
	I0729 04:17:20.039769   16232 main.go:141] libmachine: STDOUT: 
	I0729 04:17:20.039834   16232 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:17:20.039918   16232 fix.go:56] duration metric: took 24.147708ms for fixHost
	I0729 04:17:20.039935   16232 start.go:83] releasing machines lock for "functional-356000", held for 24.295875ms
	W0729 04:17:20.040114   16232 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:17:20.046826   16232 out.go:177] 
	W0729 04:17:20.050977   16232 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:17:20.051008   16232 out.go:239] * 
	* 
	W0729 04:17:20.053634   16232 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:17:20.061992   16232 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-356000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.204147833s for "functional-356000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (69.202292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
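The trace above shows minikube's start retry flow: fixHost fails, the error is downgraded to a warning ("StartHost failed, but will try again"), minikube sleeps five seconds, retries once, and only then exits with GUEST_PROVISION. A condensed sketch of that control flow (startHost is a stand-in for the qemu2 driver start that the logs show being refused both times):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the driver start; in the trace above it fails
    // both times with a refused socket_vmnet connection.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // the fixed delay visible in the log
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }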

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.380208ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-356000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (30.250833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
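The check that fails above shells out to kubectl; because no cluster ever started, the kubeconfig has no current-context and kubectl exits non-zero. A minimal reproduction of that probe (a sketch using os/exec, not the suite's helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "current-context").Output()
        if err != nil {
            // Matches the failure above: "error: current-context is not set".
            fmt.Println("no current-context:", err)
            return
        }
        fmt.Println("current-context:", strings.TrimSpace(string(out)))
    }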

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-356000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-356000 get po -A: exit status 1 (26.39725ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-356000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-356000\n"*: args "kubectl --context functional-356000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-356000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (30.579834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl images: exit status 83 (54.795416ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.887209ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-356000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.887084ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.869167ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-356000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 kubectl -- --context functional-356000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 kubectl -- --context functional-356000 get pods: exit status 1 (697.739208ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-356000
	* no server found for cluster "functional-356000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-356000 kubectl -- --context functional-356000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (30.437125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.73s)
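minikube's kubectl subcommand is a pass-through: everything after "--" is handed to a kubectl binary matched to the cluster version, so the configuration error above comes from the kubeconfig lookup, not from the wrapper. The forwarded call is equivalent to the direct invocation the next test makes:

    kubectl --context functional-356000 get pods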

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-356000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-356000 get pods: exit status 1 (937.226958ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-356000
	* no server found for cluster "functional-356000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-356000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (29.951167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)
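Both kubectl tests fail for the same reason: the context functional-356000 was never written to the kubeconfig because provisioning failed. A quick check, using the KUBECONFIG path printed in this run's start output and the standard "config get-contexts" subcommand:

    KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig \
      kubectl config get-contexts functional-356000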

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-356000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-356000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.188942291s)

-- stdout --
	* [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-356000" primary control-plane node in "functional-356000" cluster
	* Restarting existing qemu2 VM for "functional-356000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-356000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-356000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.189450834s for "functional-356000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (71.321959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
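This restart failure shows the root cause running through the whole report: the qemu2 driver launches the VM through socket_vmnet_client, which must reach a socket_vmnet daemon listening on the unix socket /var/run/socket_vmnet, and that connection is refused on both attempts. A minimal sketch of how to check the daemon on the agent, using only the paths printed in the log above and standard macOS tools (no minikube involved):

    ls -l /var/run/socket_vmnet     # the unix socket the client dials; missing or stale if the daemon is down
    pgrep -fl socket_vmnet          # is any socket_vmnet process running at all?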

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-356000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-356000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.727459ms)

** stderr ** 
	error: context "functional-356000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-356000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (29.876542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
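With no usable context, the component query cannot even list pods. For reference, the test's health check amounts to listing the control-plane pods in kube-system by label and inspecting their status; a compact, purely illustrative variant of the same query (jsonpath instead of full JSON):

    kubectl --context functional-356000 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'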

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 logs: exit status 83 (77.531625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-753000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-753000                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -o=json --download-only                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-386000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -o=json --download-only                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-771000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-771000                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-753000                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-771000                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | --download-only -p                                                       | binary-mirror-393000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | binary-mirror-393000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52921                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-393000                                                  | binary-mirror-393000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| addons  | enable dashboard -p                                                      | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | addons-621000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | addons-621000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-621000 --wait=true                                             | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-621000                                                         | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -p nospam-129000 -n=1 --memory=2250 --wait=false                         | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-129000                                                         | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-356000                              |                      |         |         |                     |                     |
	| cache   | functional-356000 cache delete                                           | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-356000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	| ssh     | functional-356000 ssh sudo                                               | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-356000                                                        | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-356000 ssh                                                    | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-356000 cache reload                                           | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	| ssh     | functional-356000 ssh                                                    | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-356000 kubectl --                                             | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | --context functional-356000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:17:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:17:25.227518   16326 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:17:25.227659   16326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:17:25.227661   16326 out.go:304] Setting ErrFile to fd 2...
	I0729 04:17:25.227662   16326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:17:25.227786   16326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:17:25.228768   16326 out.go:298] Setting JSON to false
	I0729 04:17:25.244843   16326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8214,"bootTime":1722243631,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:17:25.244908   16326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:17:25.252828   16326 out.go:177] * [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:17:25.260736   16326 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:17:25.260780   16326 notify.go:220] Checking for updates...
	I0729 04:17:25.270579   16326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:17:25.274698   16326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:17:25.277588   16326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:17:25.280641   16326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:17:25.283660   16326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:17:25.285331   16326 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:17:25.285380   16326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:17:25.288832   16326 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:17:25.294740   16326 start.go:297] selected driver: qemu2
	I0729 04:17:25.294746   16326 start.go:901] validating driver "qemu2" against &{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:17:25.294804   16326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:17:25.297245   16326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:17:25.297278   16326 cni.go:84] Creating CNI manager for ""
	I0729 04:17:25.297285   16326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:17:25.297334   16326 start.go:340] cluster config:
	{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:17:25.300930   16326 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:17:25.305528   16326 out.go:177] * Starting "functional-356000" primary control-plane node in "functional-356000" cluster
	I0729 04:17:25.311661   16326 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:17:25.311673   16326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:17:25.311680   16326 cache.go:56] Caching tarball of preloaded images
	I0729 04:17:25.311731   16326 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:17:25.311735   16326 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:17:25.311796   16326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/functional-356000/config.json ...
	I0729 04:17:25.312077   16326 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:17:25.312110   16326 start.go:364] duration metric: took 28.709µs to acquireMachinesLock for "functional-356000"
	I0729 04:17:25.312118   16326 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:17:25.312122   16326 fix.go:54] fixHost starting: 
	I0729 04:17:25.312236   16326 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
	W0729 04:17:25.312242   16326 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:17:25.321670   16326 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
	I0729 04:17:25.327684   16326 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:17:25.327728   16326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
	I0729 04:17:25.329939   16326 main.go:141] libmachine: STDOUT: 
	I0729 04:17:25.329954   16326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:17:25.329979   16326 fix.go:56] duration metric: took 17.858542ms for fixHost
	I0729 04:17:25.329982   16326 start.go:83] releasing machines lock for "functional-356000", held for 17.869792ms
	W0729 04:17:25.329987   16326 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:17:25.330027   16326 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:17:25.330032   16326 start.go:729] Will try again in 5 seconds ...
	I0729 04:17:30.332132   16326 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:17:30.332537   16326 start.go:364] duration metric: took 344.25µs to acquireMachinesLock for "functional-356000"
	I0729 04:17:30.332698   16326 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:17:30.332709   16326 fix.go:54] fixHost starting: 
	I0729 04:17:30.333387   16326 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
	W0729 04:17:30.333407   16326 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:17:30.341771   16326 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
	I0729 04:17:30.345723   16326 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:17:30.345932   16326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
	I0729 04:17:30.354794   16326 main.go:141] libmachine: STDOUT: 
	I0729 04:17:30.354852   16326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:17:30.354958   16326 fix.go:56] duration metric: took 22.249334ms for fixHost
	I0729 04:17:30.354971   16326 start.go:83] releasing machines lock for "functional-356000", held for 22.4215ms
	W0729 04:17:30.355122   16326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:17:30.361730   16326 out.go:177] 
	W0729 04:17:30.365764   16326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:17:30.365789   16326 out.go:239] * 
	W0729 04:17:30.368452   16326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:17:30.375738   16326 out.go:177] 
	
	
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-356000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | -p download-only-753000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-753000                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | -p download-only-386000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | -p download-only-771000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-771000                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-753000                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-771000                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-393000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | binary-mirror-393000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52921                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-393000                                                  | binary-mirror-393000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | addons-621000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | addons-621000                                                            |                      |         |         |                     |                     |
| start   | -p addons-621000 --wait=true                                             | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-621000                                                         | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | -p nospam-129000 -n=1 --memory=2250 --wait=false                         | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-129000                                                         | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | minikube-local-cache-test:functional-356000                              |                      |         |         |                     |                     |
| cache   | functional-356000 cache delete                                           | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | minikube-local-cache-test:functional-356000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
| ssh     | functional-356000 ssh sudo                                               | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-356000                                                        | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-356000 ssh                                                    | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-356000 cache reload                                           | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
| ssh     | functional-356000 ssh                                                    | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-356000 kubectl --                                             | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --context functional-356000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/29 04:17:25
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 04:17:25.227518   16326 out.go:291] Setting OutFile to fd 1 ...
I0729 04:17:25.227659   16326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:17:25.227661   16326 out.go:304] Setting ErrFile to fd 2...
I0729 04:17:25.227662   16326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:17:25.227786   16326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:17:25.228768   16326 out.go:298] Setting JSON to false
I0729 04:17:25.244843   16326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8214,"bootTime":1722243631,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 04:17:25.244908   16326 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 04:17:25.252828   16326 out.go:177] * [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 04:17:25.260736   16326 out.go:177]   - MINIKUBE_LOCATION=19341
I0729 04:17:25.260780   16326 notify.go:220] Checking for updates...
I0729 04:17:25.270579   16326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
I0729 04:17:25.274698   16326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 04:17:25.277588   16326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 04:17:25.280641   16326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
I0729 04:17:25.283660   16326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 04:17:25.285331   16326 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:17:25.285380   16326 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 04:17:25.288832   16326 out.go:177] * Using the qemu2 driver based on existing profile
I0729 04:17:25.294740   16326 start.go:297] selected driver: qemu2
I0729 04:17:25.294746   16326 start.go:901] validating driver "qemu2" against &{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:17:25.294804   16326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 04:17:25.297245   16326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 04:17:25.297278   16326 cni.go:84] Creating CNI manager for ""
I0729 04:17:25.297285   16326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 04:17:25.297334   16326 start.go:340] cluster config:
{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:17:25.300930   16326 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 04:17:25.305528   16326 out.go:177] * Starting "functional-356000" primary control-plane node in "functional-356000" cluster
I0729 04:17:25.311661   16326 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 04:17:25.311673   16326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 04:17:25.311680   16326 cache.go:56] Caching tarball of preloaded images
I0729 04:17:25.311731   16326 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 04:17:25.311735   16326 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 04:17:25.311796   16326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/functional-356000/config.json ...
I0729 04:17:25.312077   16326 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:17:25.312110   16326 start.go:364] duration metric: took 28.709µs to acquireMachinesLock for "functional-356000"
I0729 04:17:25.312118   16326 start.go:96] Skipping create...Using existing machine configuration
I0729 04:17:25.312122   16326 fix.go:54] fixHost starting: 
I0729 04:17:25.312236   16326 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
W0729 04:17:25.312242   16326 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:17:25.321670   16326 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
I0729 04:17:25.327684   16326 qemu.go:418] Using hvf for hardware acceleration
I0729 04:17:25.327728   16326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
I0729 04:17:25.329939   16326 main.go:141] libmachine: STDOUT: 
I0729 04:17:25.329954   16326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0729 04:17:25.329979   16326 fix.go:56] duration metric: took 17.858542ms for fixHost
I0729 04:17:25.329982   16326 start.go:83] releasing machines lock for "functional-356000", held for 17.869792ms
W0729 04:17:25.329987   16326 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:17:25.330027   16326 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:17:25.330032   16326 start.go:729] Will try again in 5 seconds ...
I0729 04:17:30.332132   16326 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:17:30.332537   16326 start.go:364] duration metric: took 344.25µs to acquireMachinesLock for "functional-356000"
I0729 04:17:30.332698   16326 start.go:96] Skipping create...Using existing machine configuration
I0729 04:17:30.332709   16326 fix.go:54] fixHost starting: 
I0729 04:17:30.333387   16326 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
W0729 04:17:30.333407   16326 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:17:30.341771   16326 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
I0729 04:17:30.345723   16326 qemu.go:418] Using hvf for hardware acceleration
I0729 04:17:30.345932   16326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
I0729 04:17:30.354794   16326 main.go:141] libmachine: STDOUT: 
I0729 04:17:30.354852   16326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0729 04:17:30.354958   16326 fix.go:56] duration metric: took 22.249334ms for fixHost
I0729 04:17:30.354971   16326 start.go:83] releasing machines lock for "functional-356000", held for 22.4215ms
W0729 04:17:30.355122   16326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:17:30.361730   16326 out.go:177] 
W0729 04:17:30.365764   16326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:17:30.365789   16326 out.go:239] * 
W0729 04:17:30.368452   16326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:17:30.375738   16326 out.go:177] 
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
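
Every failed start captured above shares one root cause: the qemu2 driver cannot reach the host's socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and every follow-on check, including the "Linux" marker the logs tests look for, runs against a stopped host. A minimal triage sketch for the agent follows, assuming socket_vmnet was installed and service-managed through Homebrew as in the minikube qemu2 driver setup; the service commands are assumptions, not output from this run:

    # Assumption: Homebrew-managed socket_vmnet (per minikube's qemu2 driver setup).
    ls -l /var/run/socket_vmnet                  # the socket the driver dials; absent if the daemon is down
    sudo brew services list | grep socket_vmnet  # the daemon must run as root to create the socket
    sudo brew services restart socket_vmnet      # restart it, then retry a single start to confirm recovery
    out/minikube-darwin-arm64 start -p functional-356000 --driver=qemu2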
TestFunctional/serial/LogsFileCmd (0.07s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd601162266/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | -p download-only-753000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-753000                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | -p download-only-386000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | -p download-only-771000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-771000                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-753000                                                  | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| delete  | -p download-only-771000                                                  | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-393000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | binary-mirror-393000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52921                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-393000                                                  | binary-mirror-393000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | addons-621000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | addons-621000                                                            |                      |         |         |                     |                     |
| start   | -p addons-621000 --wait=true                                             | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-621000                                                         | addons-621000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
| start   | -p nospam-129000 -n=1 --memory=2250 --wait=false                         | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-129000 --log_dir                                                  | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-129000                                                         | nospam-129000        | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-356000 cache add                                              | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | minikube-local-cache-test:functional-356000                              |                      |         |         |                     |                     |
| cache   | functional-356000 cache delete                                           | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | minikube-local-cache-test:functional-356000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
| ssh     | functional-356000 ssh sudo                                               | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-356000                                                        | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-356000 ssh                                                    | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-356000 cache reload                                           | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
| ssh     | functional-356000 ssh                                                    | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT | 29 Jul 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-356000 kubectl --                                             | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --context functional-356000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-356000                                                     | functional-356000    | jenkins | v1.33.1 | 29 Jul 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/29 04:17:25
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0729 04:17:25.227518   16326 out.go:291] Setting OutFile to fd 1 ...
I0729 04:17:25.227659   16326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:17:25.227661   16326 out.go:304] Setting ErrFile to fd 2...
I0729 04:17:25.227662   16326 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:17:25.227786   16326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:17:25.228768   16326 out.go:298] Setting JSON to false
I0729 04:17:25.244843   16326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8214,"bootTime":1722243631,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0729 04:17:25.244908   16326 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0729 04:17:25.252828   16326 out.go:177] * [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0729 04:17:25.260736   16326 out.go:177]   - MINIKUBE_LOCATION=19341
I0729 04:17:25.260780   16326 notify.go:220] Checking for updates...
I0729 04:17:25.270579   16326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
I0729 04:17:25.274698   16326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0729 04:17:25.277588   16326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0729 04:17:25.280641   16326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
I0729 04:17:25.283660   16326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0729 04:17:25.285331   16326 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:17:25.285380   16326 driver.go:392] Setting default libvirt URI to qemu:///system
I0729 04:17:25.288832   16326 out.go:177] * Using the qemu2 driver based on existing profile
I0729 04:17:25.294740   16326 start.go:297] selected driver: qemu2
I0729 04:17:25.294746   16326 start.go:901] validating driver "qemu2" against &{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:17:25.294804   16326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0729 04:17:25.297245   16326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0729 04:17:25.297278   16326 cni.go:84] Creating CNI manager for ""
I0729 04:17:25.297285   16326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0729 04:17:25.297334   16326 start.go:340] cluster config:
{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0729 04:17:25.300930   16326 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0729 04:17:25.305528   16326 out.go:177] * Starting "functional-356000" primary control-plane node in "functional-356000" cluster
I0729 04:17:25.311661   16326 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0729 04:17:25.311673   16326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0729 04:17:25.311680   16326 cache.go:56] Caching tarball of preloaded images
I0729 04:17:25.311731   16326 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0729 04:17:25.311735   16326 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0729 04:17:25.311796   16326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/functional-356000/config.json ...
I0729 04:17:25.312077   16326 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:17:25.312110   16326 start.go:364] duration metric: took 28.709µs to acquireMachinesLock for "functional-356000"
I0729 04:17:25.312118   16326 start.go:96] Skipping create...Using existing machine configuration
I0729 04:17:25.312122   16326 fix.go:54] fixHost starting: 
I0729 04:17:25.312236   16326 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
W0729 04:17:25.312242   16326 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:17:25.321670   16326 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
I0729 04:17:25.327684   16326 qemu.go:418] Using hvf for hardware acceleration
I0729 04:17:25.327728   16326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
I0729 04:17:25.329939   16326 main.go:141] libmachine: STDOUT: 
I0729 04:17:25.329954   16326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 04:17:25.329979   16326 fix.go:56] duration metric: took 17.858542ms for fixHost
I0729 04:17:25.329982   16326 start.go:83] releasing machines lock for "functional-356000", held for 17.869792ms
W0729 04:17:25.329987   16326 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:17:25.330027   16326 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:17:25.330032   16326 start.go:729] Will try again in 5 seconds ...
I0729 04:17:30.332132   16326 start.go:360] acquireMachinesLock for functional-356000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0729 04:17:30.332537   16326 start.go:364] duration metric: took 344.25µs to acquireMachinesLock for "functional-356000"
I0729 04:17:30.332698   16326 start.go:96] Skipping create...Using existing machine configuration
I0729 04:17:30.332709   16326 fix.go:54] fixHost starting: 
I0729 04:17:30.333387   16326 fix.go:112] recreateIfNeeded on functional-356000: state=Stopped err=<nil>
W0729 04:17:30.333407   16326 fix.go:138] unexpected machine state, will restart: <nil>
I0729 04:17:30.341771   16326 out.go:177] * Restarting existing qemu2 VM for "functional-356000" ...
I0729 04:17:30.345723   16326 qemu.go:418] Using hvf for hardware acceleration
I0729 04:17:30.345932   16326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:bf:34:1c:36:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/functional-356000/disk.qcow2
I0729 04:17:30.354794   16326 main.go:141] libmachine: STDOUT: 
I0729 04:17:30.354852   16326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0729 04:17:30.354958   16326 fix.go:56] duration metric: took 22.249334ms for fixHost
I0729 04:17:30.354971   16326 start.go:83] releasing machines lock for "functional-356000", held for 22.4215ms
W0729 04:17:30.355122   16326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-356000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0729 04:17:30.361730   16326 out.go:177] 
W0729 04:17:30.365764   16326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0729 04:17:30.365789   16326 out.go:239] * 
W0729 04:17:30.368452   16326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:17:30.375738   16326 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
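Every start attempt in the log above dies at the same point: socket_vmnet_client gets "Connection refused" on the host-side unix socket /var/run/socket_vmnet, so QEMU is never launched and the profile is left Stopped; the failures below are all downstream of that. As a minimal sketch (not part of the test suite; the socket path is copied from the log), the same precondition can be probed directly in Go:

package main

// socketprobe: dials the unix socket that socket_vmnet_client needs before
// it can wrap qemu-system-aarch64. A stopped socket_vmnet daemon produces
// the same "connection refused" seen in the Last Start log above.
import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}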

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-356000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-356000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.474458ms)

** stderr ** 
	error: context "functional-356000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-356000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
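This failure never reaches the invalid-service logic: kubectl exits up front because the kubeconfig has no "functional-356000" context, since the cluster above never started. A hedged sketch of that pre-check using the kubeconfig loader from k8s.io/client-go (an assumed dependency; the test itself just shells out to kubectl):

package main

// contextcheck: loads the kubeconfig the way kubectl does ($KUBECONFIG,
// falling back to ~/.kube/config) and reports whether the profile's
// context was ever written.
import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["functional-356000"]; !ok {
		// The state hit above: no VM, so minikube never wrote a context.
		fmt.Fprintln(os.Stderr, `context "functional-356000" does not exist`)
		os.Exit(1)
	}
	fmt.Println("context found")
}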

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-356000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-356000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-356000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-356000 --alsologtostderr -v=1] stderr:
I0729 04:18:08.055822   16532 out.go:291] Setting OutFile to fd 1 ...
I0729 04:18:08.056214   16532 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:08.056218   16532 out.go:304] Setting ErrFile to fd 2...
I0729 04:18:08.056221   16532 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:08.056390   16532 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:18:08.056602   16532 mustload.go:65] Loading cluster: functional-356000
I0729 04:18:08.056788   16532 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:08.060711   16532 out.go:177] * The control-plane node functional-356000 host is not running: state=Stopped
I0729 04:18:08.064737   16532 out.go:177]   To start a cluster, run: "minikube start -p functional-356000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (41.960667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.16s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 status: exit status 7 (72.929541ms)

-- stdout --
	functional-356000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-356000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.8675ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-356000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 status -o json: exit status 7 (28.453791ms)

-- stdout --
	{"Name":"functional-356000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-356000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (29.921333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.16s)
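Of the three status forms exercised above, the -o json one is the easiest to consume programmatically. A minimal sketch that decodes the exact payload shown in the stdout above (the struct simply mirrors that payload; it is not an official minikube API):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// minikubeStatus mirrors the fields of the JSON document printed by
// "minikube status -o json" in the output above.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"functional-356000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	if st.Host != "Running" {
		fmt.Printf("%s: host is %s; checks needing a live VM will fail\n", st.Name, st.Host)
	}
}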

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-356000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-356000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.195125ms)

** stderr ** 
	error: context "functional-356000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-356000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-356000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-356000 describe po hello-node-connect: exit status 1 (26.516541ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

** /stderr **
functional_test.go:1600: "kubectl --context functional-356000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-356000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-356000 logs -l app=hello-node-connect: exit status 1 (26.735958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

** /stderr **
functional_test.go:1606: "kubectl --context functional-356000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-356000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-356000 describe svc hello-node-connect: exit status 1 (26.480667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

** /stderr **
functional_test.go:1612: "kubectl --context functional-356000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (30.119292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-356000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (33.302583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "echo hello": exit status 83 (51.731417ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n"*. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "cat /etc/hostname": exit status 83 (41.985458ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-356000"- but got *"* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n"*. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (35.864542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (52.852875ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-356000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 "sudo cat /home/docker/cp-test.txt": exit status 83 (38.8725ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-356000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-356000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cp functional-356000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3456130344/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 cp functional-356000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3456130344/001/cp-test.txt: exit status 83 (41.635917ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-356000 cp functional-356000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3456130344/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.973041ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3456130344/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.34375ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-356000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (58.105959ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-356000 ssh -n functional-356000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-356000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-356000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
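The (-want +got) blocks above are string diffs: the test expected the copied file's contents, but got minikube's "host is not running" advice on stdout instead. The format looks like go-cmp output; a sketch reproducing the same kind of report with github.com/google/go-cmp (an assumed dependency):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process" // testdata/cp-test.txt
	// What a stopped cluster actually returns in place of the file:
	got := "* The control-plane node functional-356000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-356000\"\n"
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}

The same mismatch shape recurs in FileSync and CertSync below: in each case the stopped-host message substitutes for real file contents.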

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15973/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/test/nested/copy/15973/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/test/nested/copy/15973/hosts": exit status 83 (43.19425ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/test/nested/copy/15973/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-356000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-356000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (29.387167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15973.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/15973.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/15973.pem": exit status 83 (47.84175ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/15973.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo cat /etc/ssl/certs/15973.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/15973.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-356000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-356000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15973.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /usr/share/ca-certificates/15973.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /usr/share/ca-certificates/15973.pem": exit status 83 (45.734791ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/15973.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo cat /usr/share/ca-certificates/15973.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/15973.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-356000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-356000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (42.591875ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-356000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-356000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/159732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/159732.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/159732.pem": exit status 83 (40.487209ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/159732.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo cat /etc/ssl/certs/159732.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/159732.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-356000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-356000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/159732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /usr/share/ca-certificates/159732.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /usr/share/ca-certificates/159732.pem": exit status 83 (38.681042ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/159732.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo cat /usr/share/ca-certificates/159732.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/159732.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-356000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-356000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (46.804166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-356000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-356000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (29.649833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
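Note: the CertSync assertions above reduce to catting a file over `minikube ssh` and diffing it against the local PEM. A minimal standalone sketch of that flow, using the profile and path from this log (assumes a minikube binary on PATH; this is illustrative, not minikube's actual test helper code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "functional-356000" // profile name from the log above
	want, err := os.ReadFile("minikube_test2.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read local pem:", err)
		os.Exit(1)
	}
	// Exit status 83 on this command is minikube's "host not running" path,
	// which is the failure mode throughout this report.
	got, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo cat /usr/share/ca-certificates/159732.pem").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "minikube ssh:", err)
		os.Exit(1)
	}
	if !bytes.Equal(want, got) {
		fmt.Println("pem mismatch between host and guest")
	}
}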

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-356000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-356000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.986666ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-356000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-356000 -n functional-356000: exit status 7 (28.827042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-356000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
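Note: the NodeLabels check is just the go-template query above plus substring checks on its output. A sketch with the context name and label keys taken from the log (assumes kubectl on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const kctx = "functional-356000" // kubeconfig context from the log above
	// Same query the test issues: print every label key on the first node.
	out, err := exec.Command("kubectl", "--context", kctx, "get", "nodes",
		"--output=go-template",
		"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").Output()
	if err != nil {
		// With the context missing (this report), kubectl exits 1 before
		// ever reaching the API server.
		fmt.Println("kubectl:", err)
		return
	}
	for _, want := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
		"minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing node label:", want)
		}
	}
}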

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo systemctl is-active crio": exit status 83 (47.207417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
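Note: the check itself is small. With docker as the container runtime, `systemctl is-active crio` inside the guest should report a non-active state (and exit non-zero). A sketch under the same assumptions as the earlier one:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "functional-356000"
	// `systemctl is-active` prints the unit state and exits non-zero for
	// anything other than "active", so both signals matter. In this report
	// the command never reaches systemd: minikube exits 83 first.
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err == nil || state == "active" {
		fmt.Println("crio unexpectedly active:", state)
		return
	}
	fmt.Println("crio state:", state) // expected: "inactive" with the docker runtime
}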

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0729 04:17:31.027009   16375 out.go:291] Setting OutFile to fd 1 ...
I0729 04:17:31.027172   16375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:17:31.027176   16375 out.go:304] Setting ErrFile to fd 2...
I0729 04:17:31.027179   16375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:17:31.027313   16375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:17:31.027585   16375 mustload.go:65] Loading cluster: functional-356000
I0729 04:17:31.027793   16375 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:17:31.030804   16375 out.go:177] * The control-plane node functional-356000 host is not running: state=Stopped
I0729 04:17:31.037855   16375 out.go:177]   To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
stdout: * The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 16374: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-356000": client config: context "functional-356000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (97.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-356000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-356000 get svc nginx-svc: exit status 1 (69.119333ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-356000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-356000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (97.24s)
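Note: once a tunnel is up, AccessDirect is a plain HTTP GET against the service's LoadBalancer IP; here the tunnel never produced one, which is why the test was left with the bare "http:" URL in the first line. A sketch with a hypothetical ingress IP (the real one would come from the nginx-svc LoadBalancer status):

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Hypothetical ingress IP; in the real test it is read from the service
	// once `minikube tunnel` assigns it.
	const ingressIP = "10.106.0.10"
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://" + ingressIP)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if !strings.Contains(string(body), "Welcome to nginx!") {
		fmt.Println("unexpected body from nginx-svc")
	}
}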

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-356000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-356000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.496541ms)

                                                
                                                
** stderr ** 
	error: context "functional-356000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-356000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 service list: exit status 83 (42.600542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-356000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 service list -o json: exit status 83 (39.783875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-356000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 service --namespace=default --https --url hello-node: exit status 83 (41.830875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-356000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 service hello-node --url --format={{.IP}}: exit status 83 (40.880875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-356000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 service hello-node --url: exit status 83 (40.846541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-356000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:1565: failed to parse "* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"": parse "* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
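Note: the `net/url: invalid control character in URL` error above is exactly what Go's url.Parse returns when the two-line stopped-host message is fed to it as if it were a service URL, since the embedded newline is an ASCII control character. Reproducible in isolation:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The stdout the test tried to parse: two lines, so it contains '\n',
	// which net/url rejects outright.
	s := "* The control-plane node functional-356000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-356000\""
	_, err := url.Parse(s)
	fmt.Println(err) // parse error: net/url: invalid control character in URL
}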

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 version -o=json --components: exit status 83 (41.000625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-356000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-356000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
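Note: the components check is a set of substring assertions over the `minikube version -o=json --components` output, one per expected bundled binary. A sketch using the component names from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-356000",
		"version", "-o=json", "--components").Output()
	if err != nil {
		fmt.Println("version:", err) // exit status 83 in this report
		return
	}
	for _, comp := range []string{"buildctl", "commit", "containerd", "crictl",
		"crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
		if !strings.Contains(string(out), comp) {
			fmt.Println("missing component:", comp)
		}
	}
}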

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-356000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-356000 image ls --format short --alsologtostderr:
I0729 04:18:12.967081   16658 out.go:291] Setting OutFile to fd 1 ...
I0729 04:18:12.967213   16658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:12.967217   16658 out.go:304] Setting ErrFile to fd 2...
I0729 04:18:12.967219   16658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:12.967346   16658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:18:12.968069   16658 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:12.968128   16658 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)
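Note: this and the following ImageList variants all boil down to listing images in some format and checking that the control-plane images (registry.k8s.io/pause among them) appear; with the host stopped, the list is empty in every format. A sketch of the short-format case:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-356000",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		fmt.Println("image ls:", err)
		return
	}
	if !strings.Contains(string(out), "registry.k8s.io/pause") {
		fmt.Println("registry.k8s.io/pause not listed") // the failure seen here
	}
}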

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-356000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-356000 image ls --format table --alsologtostderr:
I0729 04:18:13.190047   16670 out.go:291] Setting OutFile to fd 1 ...
I0729 04:18:13.190200   16670 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.190203   16670 out.go:304] Setting ErrFile to fd 2...
I0729 04:18:13.190205   16670 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.190334   16670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:18:13.190767   16670 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:13.190825   16670 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-356000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-356000 image ls --format json --alsologtostderr:
I0729 04:18:13.153174   16668 out.go:291] Setting OutFile to fd 1 ...
I0729 04:18:13.153313   16668 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.153316   16668 out.go:304] Setting ErrFile to fd 2...
I0729 04:18:13.153319   16668 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.153441   16668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:18:13.153854   16668 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:13.153917   16668 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-356000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-356000 image ls --format yaml --alsologtostderr:
I0729 04:18:13.001648   16660 out.go:291] Setting OutFile to fd 1 ...
I0729 04:18:13.001776   16660 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.001780   16660 out.go:304] Setting ErrFile to fd 2...
I0729 04:18:13.001782   16660 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.001898   16660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:18:13.002269   16660 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:13.002326   16660 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh pgrep buildkitd: exit status 83 (42.912ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image build -t localhost/my-image:functional-356000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-356000 image build -t localhost/my-image:functional-356000 testdata/build --alsologtostderr:
I0729 04:18:13.078125   16664 out.go:291] Setting OutFile to fd 1 ...
I0729 04:18:13.078461   16664 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.078465   16664 out.go:304] Setting ErrFile to fd 2...
I0729 04:18:13.078468   16664 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:18:13.078592   16664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:18:13.078961   16664 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:13.079395   16664 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:18:13.079615   16664 build_images.go:133] succeeded building to: 
I0729 04:18:13.079619   16664 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls
functional_test.go:442: expected "localhost/my-image:functional-356000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image load --daemon docker.io/kicbase/echo-server:functional-356000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-356000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image load --daemon docker.io/kicbase/echo-server:functional-356000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-356000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-356000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image load --daemon docker.io/kicbase/echo-server:functional-356000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-356000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image save docker.io/kicbase/echo-server:functional-356000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-356000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-356000 docker-env) && out/minikube-darwin-arm64 status -p functional-356000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-356000 docker-env) && out/minikube-darwin-arm64 status -p functional-356000": exit status 1 (50.002542ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2: exit status 83 (41.639583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:18:13.225516   16672 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:18:13.226400   16672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:13.226403   16672 out.go:304] Setting ErrFile to fd 2...
	I0729 04:18:13.226406   16672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:13.226527   16672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:18:13.226711   16672 mustload.go:65] Loading cluster: functional-356000
	I0729 04:18:13.226909   16672 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:18:13.230432   16672 out.go:177] * The control-plane node functional-356000 host is not running: state=Stopped
	I0729 04:18:13.234167   16672 out.go:177]   To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2: exit status 83 (41.559375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:18:13.309054   16676 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:18:13.309203   16676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:13.309206   16676 out.go:304] Setting ErrFile to fd 2...
	I0729 04:18:13.309208   16676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:13.309353   16676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:18:13.309564   16676 mustload.go:65] Loading cluster: functional-356000
	I0729 04:18:13.309777   16676 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:18:13.314145   16676 out.go:177] * The control-plane node functional-356000 host is not running: state=Stopped
	I0729 04:18:13.318276   16676 out.go:177]   To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2: exit status 83 (41.308ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:18:13.267311   16674 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:18:13.267468   16674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:13.267472   16674 out.go:304] Setting ErrFile to fd 2...
	I0729 04:18:13.267474   16674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:13.267604   16674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:18:13.267809   16674 mustload.go:65] Loading cluster: functional-356000
	I0729 04:18:13.268008   16674 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:18:13.272279   16674 out.go:177] * The control-plane node functional-356000 host is not running: state=Stopped
	I0729 04:18:13.276297   16674 out.go:177]   To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-356000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-356000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-356000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036016208s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
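Note: a Go equivalent of the dig probe above is a resolver pinned to the cluster DNS service IP from the log; without a working tunnel it times out the same way the dig run did:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver pinned to the in-cluster DNS service, as dig was with
	// "@10.96.0.10". Reachable only while `minikube tunnel` is routing.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err) // times out in this report
		return
	}
	fmt.Println("A records:", addrs)
}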

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.86s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (10.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-793000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-793000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.966907667s)

                                                
                                                
-- stdout --
	* [ha-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-793000" primary control-plane node in "ha-793000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-793000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:20:13.677646   16757 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:20:13.678007   16757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:20:13.678012   16757 out.go:304] Setting ErrFile to fd 2...
	I0729 04:20:13.678014   16757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:20:13.678219   16757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:20:13.679604   16757 out.go:298] Setting JSON to false
	I0729 04:20:13.696076   16757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8382,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:20:13.696147   16757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:20:13.701603   16757 out.go:177] * [ha-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:20:13.709734   16757 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:20:13.709768   16757 notify.go:220] Checking for updates...
	I0729 04:20:13.715721   16757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:20:13.718769   16757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:20:13.721716   16757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:20:13.724818   16757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:20:13.727723   16757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:20:13.729121   16757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:20:13.733725   16757 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:20:13.740597   16757 start.go:297] selected driver: qemu2
	I0729 04:20:13.740607   16757 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:20:13.740615   16757 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:20:13.742862   16757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:20:13.745694   16757 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:20:13.748822   16757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:20:13.748875   16757 cni.go:84] Creating CNI manager for ""
	I0729 04:20:13.748881   16757 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 04:20:13.748886   16757 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:20:13.748925   16757 start.go:340] cluster config:
	{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:20:13.752697   16757 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:20:13.759708   16757 out.go:177] * Starting "ha-793000" primary control-plane node in "ha-793000" cluster
	I0729 04:20:13.763727   16757 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:20:13.763745   16757 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:20:13.763754   16757 cache.go:56] Caching tarball of preloaded images
	I0729 04:20:13.763831   16757 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:20:13.763837   16757 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:20:13.764033   16757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/ha-793000/config.json ...
	I0729 04:20:13.764046   16757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/ha-793000/config.json: {Name:mk71e8f661c41e33486340c09068806e7242d633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:20:13.764388   16757 start.go:360] acquireMachinesLock for ha-793000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:20:13.764422   16757 start.go:364] duration metric: took 28.209µs to acquireMachinesLock for "ha-793000"
	I0729 04:20:13.764433   16757 start.go:93] Provisioning new machine with config: &{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:20:13.764495   16757 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:20:13.772780   16757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:20:13.790207   16757 start.go:159] libmachine.API.Create for "ha-793000" (driver="qemu2")
	I0729 04:20:13.790230   16757 client.go:168] LocalClient.Create starting
	I0729 04:20:13.790292   16757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:20:13.790325   16757 main.go:141] libmachine: Decoding PEM data...
	I0729 04:20:13.790334   16757 main.go:141] libmachine: Parsing certificate...
	I0729 04:20:13.790369   16757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:20:13.790391   16757 main.go:141] libmachine: Decoding PEM data...
	I0729 04:20:13.790401   16757 main.go:141] libmachine: Parsing certificate...
	I0729 04:20:13.790773   16757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:20:13.959109   16757 main.go:141] libmachine: Creating SSH key...
	I0729 04:20:14.018924   16757 main.go:141] libmachine: Creating Disk image...
	I0729 04:20:14.018933   16757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:20:14.019154   16757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:20:14.034548   16757 main.go:141] libmachine: STDOUT: 
	I0729 04:20:14.034567   16757 main.go:141] libmachine: STDERR: 
	I0729 04:20:14.034614   16757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2 +20000M
	I0729 04:20:14.048963   16757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:20:14.048975   16757 main.go:141] libmachine: STDERR: 
	I0729 04:20:14.048990   16757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:20:14.048992   16757 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:20:14.049002   16757 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:20:14.049026   16757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:c1:7c:de:81:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:20:14.050622   16757 main.go:141] libmachine: STDOUT: 
	I0729 04:20:14.050632   16757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:20:14.050649   16757 client.go:171] duration metric: took 260.420917ms to LocalClient.Create
	I0729 04:20:16.052772   16757 start.go:128] duration metric: took 2.288309583s to createHost
	I0729 04:20:16.052838   16757 start.go:83] releasing machines lock for "ha-793000", held for 2.288461584s
	W0729 04:20:16.052884   16757 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:20:16.067002   16757 out.go:177] * Deleting "ha-793000" in qemu2 ...
	W0729 04:20:16.096099   16757 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:20:16.096127   16757 start.go:729] Will try again in 5 seconds ...
	I0729 04:20:21.098235   16757 start.go:360] acquireMachinesLock for ha-793000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:20:21.098684   16757 start.go:364] duration metric: took 349.875µs to acquireMachinesLock for "ha-793000"
	I0729 04:20:21.098812   16757 start.go:93] Provisioning new machine with config: &{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:20:21.099122   16757 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:20:21.112201   16757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:20:21.162865   16757 start.go:159] libmachine.API.Create for "ha-793000" (driver="qemu2")
	I0729 04:20:21.162903   16757 client.go:168] LocalClient.Create starting
	I0729 04:20:21.163012   16757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:20:21.163074   16757 main.go:141] libmachine: Decoding PEM data...
	I0729 04:20:21.163091   16757 main.go:141] libmachine: Parsing certificate...
	I0729 04:20:21.163182   16757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:20:21.163242   16757 main.go:141] libmachine: Decoding PEM data...
	I0729 04:20:21.163259   16757 main.go:141] libmachine: Parsing certificate...
	I0729 04:20:21.163971   16757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:20:21.394738   16757 main.go:141] libmachine: Creating SSH key...
	I0729 04:20:21.550336   16757 main.go:141] libmachine: Creating Disk image...
	I0729 04:20:21.550344   16757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:20:21.550596   16757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:20:21.560121   16757 main.go:141] libmachine: STDOUT: 
	I0729 04:20:21.560138   16757 main.go:141] libmachine: STDERR: 
	I0729 04:20:21.560188   16757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2 +20000M
	I0729 04:20:21.568139   16757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:20:21.568151   16757 main.go:141] libmachine: STDERR: 
	I0729 04:20:21.568160   16757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:20:21.568163   16757 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:20:21.568171   16757 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:20:21.568192   16757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2d:d0:8b:12:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:20:21.569827   16757 main.go:141] libmachine: STDOUT: 
	I0729 04:20:21.569838   16757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:20:21.569848   16757 client.go:171] duration metric: took 406.9505ms to LocalClient.Create
	I0729 04:20:23.571971   16757 start.go:128] duration metric: took 2.472857875s to createHost
	I0729 04:20:23.572041   16757 start.go:83] releasing machines lock for "ha-793000", held for 2.47339375s
	W0729 04:20:23.572418   16757 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-793000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-793000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:20:23.583038   16757 out.go:177] 
	W0729 04:20:23.589233   16757 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:20:23.589267   16757 out.go:239] * 
	* 
	W0729 04:20:23.592016   16757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:20:23.602039   16757 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-793000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (66.658542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.04s)
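
Both creation attempts above die at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot connect to the daemon's unix socket at /var/run/socket_vmnet. The short Go probe below, assuming a healthy daemon would be listening on that path, reproduces the refusal independently of minikube; until that socket accepts connections, the remaining TestMultiControlPlane subtests fail fast against a profile whose host is Stopped.

// socketprobe.go: checks whether anything is accepting connections on the
// socket_vmnet unix socket; the path comes from the qemu2 driver invocation above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this host the dial fails the same way socket_vmnet_client did:
		// connect: connection refused.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}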

                                                
                                    
TestMultiControlPlane/serial/DeployApp (80.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.148584ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-793000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- rollout status deployment/busybox: exit status 1 (57.160459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.711542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.473666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.830834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.48825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.678417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.894833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.056125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.383792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.739708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.405625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.727708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.588292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.6785ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.183334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.557417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (80.34s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-793000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.268209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-793000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (30.076708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-793000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-793000 -v=7 --alsologtostderr: exit status 83 (43.039916ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-793000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-793000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:44.139023   16905 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:44.139597   16905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.139600   16905 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:44.139603   16905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.139768   16905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:44.140042   16905 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:44.140219   16905 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:44.145085   16905 out.go:177] * The control-plane node ha-793000 host is not running: state=Stopped
	I0729 04:21:44.148972   16905 out.go:177]   To start a cluster, run: "minikube start -p ha-793000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-793000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (30.218208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-793000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-793000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.16675ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-793000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-793000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-793000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.536791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-793000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-793000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.22375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
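
The two assertions above parse the 'profile list --output json' payload and check the node count and profile status. The sketch below decodes an abbreviated copy of that payload; the struct layout is a minimal stand-in for illustration, not minikube's real config types.

// profilecheck.go: decodes a trimmed copy of the profile list payload above.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the log above: one stopped control-plane node.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-793000","Status":"Stopped","Config":{"Nodes":[{"ControlPlane":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// The test wanted 4 nodes and status "HAppy"; this run had 1 node, "Stopped".
	fmt.Printf("%s: %d node(s), status %q\n", p.Name, len(p.Config.Nodes), p.Status)
}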

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status --output json -v=7 --alsologtostderr: exit status 7 (29.426834ms)

                                                
                                                
-- stdout --
	{"Name":"ha-793000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:44.344461   16917 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:44.344596   16917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.344599   16917 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:44.344601   16917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.344754   16917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:44.344875   16917 out.go:298] Setting JSON to true
	I0729 04:21:44.344884   16917 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:44.344947   16917 notify.go:220] Checking for updates...
	I0729 04:21:44.345098   16917 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:44.345104   16917 status.go:255] checking status of ha-793000 ...
	I0729 04:21:44.345306   16917 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:44.345310   16917 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:44.345312   16917 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-793000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.585416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
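
The decode error at ha_test.go:333 is a shape mismatch: with only the single stopped node present, 'status --output json' printed one JSON object, while the test unmarshals into a slice ([]cmd.Status). The standalone snippet below reproduces that error class; Status here is a two-field stand-in, not minikube's cmd.Status.

// shape.go: unmarshaling a JSON object into a Go slice fails the same way.
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name string
	Host string
}

func main() {
	obj := []byte(`{"Name":"ha-793000","Host":"Stopped"}`)

	var many []Status
	fmt.Println(json.Unmarshal(obj, &many))
	// json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	if err := json.Unmarshal(obj, &one); err == nil {
		fmt.Println("decodes cleanly as a single object:", one.Name, one.Host)
	}
}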

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.697083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:44.405219   16921 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:44.405801   16921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.405805   16921 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:44.405808   16921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.406008   16921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:44.406245   16921 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:44.406449   16921 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:44.411166   16921 out.go:177] 
	W0729 04:21:44.415216   16921 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 04:21:44.415221   16921 out.go:239] * 
	* 
	W0729 04:21:44.417440   16921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:21:44.420237   16921 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-793000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (29.251917ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:44.452745   16923 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:44.452897   16923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.452900   16923 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:44.452903   16923 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.453034   16923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:44.453151   16923 out.go:298] Setting JSON to false
	I0729 04:21:44.453160   16923 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:44.453225   16923 notify.go:220] Checking for updates...
	I0729 04:21:44.453350   16923 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:44.453357   16923 status.go:255] checking status of ha-793000 ...
	I0729 04:21:44.453571   16923 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:44.453575   16923 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:44.453577   16923 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.52125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
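Note: exit status 85 (GUEST_NODE_RETRIEVE: Could not find node m02) follows directly from the earlier StartCluster failure: the profile only ever registered its primary node, so there is no m02 to stop. A guard like the sketch below would surface that precondition more directly than the GUEST_NODE_RETRIEVE box (hypothetical helper; the listed node name may be profile-prefixed, e.g. ha-793000-m02):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // nodeExists reports whether `node list` shows the given node for a profile.
    // Sketch only: it assumes the current one-node-per-line output shape.
    func nodeExists(minikube, profile, node string) (bool, error) {
        out, err := exec.Command(minikube, "node", "list", "-p", profile).Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), node) {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := nodeExists("out/minikube-darwin-arm64", "ha-793000", "ha-793000-m02")
        fmt.Println(ok, err) // false <nil> in this run: m02 was never created
    }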

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-793000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.991875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
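Note: the ha_test.go:413 assertion decodes the JSON blob above and compares the top-level Status field, which reads "Stopped" instead of the expected "Degraded". The check only needs a few of those fields; a minimal sketch of the decoding (struct shape inferred from the payload printed above, not minikube's own types):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList covers only the fields the check needs; the JSON key
    // names match the payload shown in the failure message.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, p := range pl.Valid {
            // The failing assertion wanted "Degraded" here but saw "Stopped".
            fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
        }
    }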

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (47.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 node start m02 -v=7 --alsologtostderr: exit status 85 (49.639375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:44.589160   16932 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:44.589836   16932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.589840   16932 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:44.589843   16932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.590007   16932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:44.590216   16932 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:44.590415   16932 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:44.594226   16932 out.go:177] 
	W0729 04:21:44.598274   16932 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0729 04:21:44.598278   16932 out.go:239] * 
	* 
	W0729 04:21:44.600570   16932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:21:44.605181   16932 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0729 04:21:44.589160   16932 out.go:291] Setting OutFile to fd 1 ...
I0729 04:21:44.589836   16932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:21:44.589840   16932 out.go:304] Setting ErrFile to fd 2...
I0729 04:21:44.589843   16932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:21:44.590007   16932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:21:44.590216   16932 mustload.go:65] Loading cluster: ha-793000
I0729 04:21:44.590415   16932 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:21:44.594226   16932 out.go:177] 
W0729 04:21:44.598274   16932 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0729 04:21:44.598278   16932 out.go:239] * 
* 
W0729 04:21:44.600570   16932 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:21:44.605181   16932 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-793000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (29.764958ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:44.638336   16934 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:44.638467   16934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.638471   16934 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:44.638473   16934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:44.638603   16934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:44.638740   16934 out.go:298] Setting JSON to false
	I0729 04:21:44.638753   16934 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:44.638822   16934 notify.go:220] Checking for updates...
	I0729 04:21:44.638953   16934 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:44.638959   16934 status.go:255] checking status of ha-793000 ...
	I0729 04:21:44.639163   16934 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:44.639167   16934 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:44.639169   16934 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (73.349709ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:45.314054   16938 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:45.314271   16938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:45.314275   16938 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:45.314278   16938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:45.314438   16938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:45.314602   16938 out.go:298] Setting JSON to false
	I0729 04:21:45.314615   16938 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:45.314650   16938 notify.go:220] Checking for updates...
	I0729 04:21:45.314868   16938 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:45.314877   16938 status.go:255] checking status of ha-793000 ...
	I0729 04:21:45.315166   16938 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:45.315171   16938 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:45.315174   16938 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (70.432417ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:47.582087   16940 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:47.582302   16940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:47.582307   16940 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:47.582311   16940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:47.582495   16940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:47.582675   16940 out.go:298] Setting JSON to false
	I0729 04:21:47.582690   16940 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:47.582726   16940 notify.go:220] Checking for updates...
	I0729 04:21:47.582976   16940 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:47.582984   16940 status.go:255] checking status of ha-793000 ...
	I0729 04:21:47.583307   16940 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:47.583312   16940 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:47.583315   16940 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (74.270458ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:50.236300   16944 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:50.236493   16944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:50.236497   16944 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:50.236500   16944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:50.236675   16944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:50.236838   16944 out.go:298] Setting JSON to false
	I0729 04:21:50.236851   16944 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:50.236891   16944 notify.go:220] Checking for updates...
	I0729 04:21:50.237102   16944 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:50.237110   16944 status.go:255] checking status of ha-793000 ...
	I0729 04:21:50.237385   16944 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:50.237390   16944 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:50.237393   16944 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (75.130959ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:55.039196   16948 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:55.039388   16948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:55.039392   16948 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:55.039396   16948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:55.039592   16948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:55.039761   16948 out.go:298] Setting JSON to false
	I0729 04:21:55.039773   16948 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:55.039812   16948 notify.go:220] Checking for updates...
	I0729 04:21:55.040014   16948 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:55.040022   16948 status.go:255] checking status of ha-793000 ...
	I0729 04:21:55.040284   16948 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:55.040289   16948 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:55.040292   16948 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (74.024667ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:21:59.205227   16954 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:21:59.205427   16954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:59.205431   16954 out.go:304] Setting ErrFile to fd 2...
	I0729 04:21:59.205434   16954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:21:59.205648   16954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:21:59.205803   16954 out.go:298] Setting JSON to false
	I0729 04:21:59.205815   16954 mustload.go:65] Loading cluster: ha-793000
	I0729 04:21:59.205857   16954 notify.go:220] Checking for updates...
	I0729 04:21:59.206077   16954 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:21:59.206088   16954 status.go:255] checking status of ha-793000 ...
	I0729 04:21:59.206369   16954 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:21:59.206374   16954 status.go:343] host is not running, skipping remaining checks
	I0729 04:21:59.206377   16954 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (72.673209ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:22:06.854361   16969 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:06.854576   16969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:06.854580   16969 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:06.854583   16969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:06.854776   16969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:06.854951   16969 out.go:298] Setting JSON to false
	I0729 04:22:06.854965   16969 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:06.855020   16969 notify.go:220] Checking for updates...
	I0729 04:22:06.855247   16969 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:06.855257   16969 status.go:255] checking status of ha-793000 ...
	I0729 04:22:06.855558   16969 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:22:06.855563   16969 status.go:343] host is not running, skipping remaining checks
	I0729 04:22:06.855566   16969 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (74.153167ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:22:16.927566   16975 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:16.927778   16975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:16.927783   16975 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:16.927786   16975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:16.927993   16975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:16.928139   16975 out.go:298] Setting JSON to false
	I0729 04:22:16.928161   16975 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:16.928197   16975 notify.go:220] Checking for updates...
	I0729 04:22:16.928407   16975 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:16.928416   16975 status.go:255] checking status of ha-793000 ...
	I0729 04:22:16.928674   16975 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:22:16.928679   16975 status.go:343] host is not running, skipping remaining checks
	I0729 04:22:16.928682   16975 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (71.974875ms)

                                                
                                                
-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:22:31.818962   16980 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:31.819240   16980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:31.819244   16980 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:31.819248   16980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:31.819461   16980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:31.819650   16980 out.go:298] Setting JSON to false
	I0729 04:22:31.819663   16980 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:31.819708   16980 notify.go:220] Checking for updates...
	I0729 04:22:31.819961   16980 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:31.819971   16980 status.go:255] checking status of ha-793000 ...
	I0729 04:22:31.820259   16980 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:22:31.820263   16980 status.go:343] host is not running, skipping remaining checks
	I0729 04:22:31.820266   16980 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (33.865583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (47.30s)
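Note: the timestamps on the repeated ha_test.go:428 runs (04:21:44, :45, :47, :50, :55, :59, then 04:22:06, :16, :31) show the harness retrying status with steadily lengthening delays until its budget is spent, which is where most of this test's 47.3s goes. A sketch of that poll-with-backoff shape (an assumed helper, not the harness's code; it uses a simple doubling delay):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollStatus retries `minikube status` with a growing delay until it
    // succeeds or the time budget runs out.
    func pollStatus(minikube, profile string, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        delay := time.Second
        for {
            err := exec.Command(minikube, "-p", profile, "status").Run()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("status never became healthy: %w", err)
            }
            time.Sleep(delay)
            delay *= 2 // back off between attempts
        }
    }

    func main() {
        if err := pollStatus("out/minikube-darwin-arm64", "ha-793000", 45*time.Second); err != nil {
            fmt.Println(err) // in this run every attempt exits 7: host stays "Stopped"
        }
    }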

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-793000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-793000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.921625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
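Note: both checks here read the same profile list payload: ha_test.go:304 wants four nodes (three control planes plus a worker) in Config.Nodes, and ha_test.go:307 wants Status "HAppy", but the profile never grew past its single stopped node. Reusing the profileList decoder sketched under DegradedAfterControlPlaneNodeStop above, the two assertions reduce to field checks like this (values from this run shown in the comments):

    // pl is a decoded profileList (see the earlier sketch).
    for _, p := range pl.Valid {
        if n := len(p.Config.Nodes); n != 4 {
            fmt.Printf("want 4 nodes, have %d\n", n) // 1 in this run
        }
        if p.Status != "HAppy" {
            fmt.Printf("want HAppy status, have %q\n", p.Status) // "Stopped" here
        }
    }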

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-793000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-793000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-793000 -v=7 --alsologtostderr: (1.856204s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-793000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-793000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.217860125s)

                                                
                                                
-- stdout --
	* [ha-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-793000" primary control-plane node in "ha-793000" cluster
	* Restarting existing qemu2 VM for "ha-793000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-793000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:22:33.883757   17003 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:33.883935   17003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:33.883942   17003 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:33.883945   17003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:33.884152   17003 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:33.885433   17003 out.go:298] Setting JSON to false
	I0729 04:22:33.905248   17003 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8522,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:22:33.905322   17003 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:22:33.910433   17003 out.go:177] * [ha-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:22:33.916449   17003 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:22:33.916482   17003 notify.go:220] Checking for updates...
	I0729 04:22:33.923351   17003 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:22:33.924729   17003 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:22:33.927339   17003 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:22:33.930408   17003 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:22:33.933393   17003 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:22:33.936643   17003 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:33.936700   17003 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:22:33.941380   17003 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:22:33.948317   17003 start.go:297] selected driver: qemu2
	I0729 04:22:33.948324   17003 start.go:901] validating driver "qemu2" against &{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:22:33.948375   17003 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:22:33.950880   17003 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:22:33.950926   17003 cni.go:84] Creating CNI manager for ""
	I0729 04:22:33.950932   17003 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:22:33.950969   17003 start.go:340] cluster config:
	{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:22:33.954672   17003 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:22:33.961365   17003 out.go:177] * Starting "ha-793000" primary control-plane node in "ha-793000" cluster
	I0729 04:22:33.965379   17003 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:22:33.965402   17003 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:22:33.965410   17003 cache.go:56] Caching tarball of preloaded images
	I0729 04:22:33.965470   17003 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:22:33.965475   17003 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:22:33.965530   17003 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/ha-793000/config.json ...
	I0729 04:22:33.965953   17003 start.go:360] acquireMachinesLock for ha-793000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:22:33.965990   17003 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "ha-793000"
	I0729 04:22:33.966001   17003 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:22:33.966015   17003 fix.go:54] fixHost starting: 
	I0729 04:22:33.966145   17003 fix.go:112] recreateIfNeeded on ha-793000: state=Stopped err=<nil>
	W0729 04:22:33.966154   17003 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:22:33.973394   17003 out.go:177] * Restarting existing qemu2 VM for "ha-793000" ...
	I0729 04:22:33.977350   17003 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:22:33.977396   17003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2d:d0:8b:12:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:22:33.979591   17003 main.go:141] libmachine: STDOUT: 
	I0729 04:22:33.979611   17003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:22:33.979642   17003 fix.go:56] duration metric: took 13.627625ms for fixHost
	I0729 04:22:33.979647   17003 start.go:83] releasing machines lock for "ha-793000", held for 13.652542ms
	W0729 04:22:33.979652   17003 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:22:33.979692   17003 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:22:33.979697   17003 start.go:729] Will try again in 5 seconds ...
	I0729 04:22:38.981701   17003 start.go:360] acquireMachinesLock for ha-793000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:22:38.982056   17003 start.go:364] duration metric: took 269.458µs to acquireMachinesLock for "ha-793000"
	I0729 04:22:38.982174   17003 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:22:38.982192   17003 fix.go:54] fixHost starting: 
	I0729 04:22:38.982902   17003 fix.go:112] recreateIfNeeded on ha-793000: state=Stopped err=<nil>
	W0729 04:22:38.982928   17003 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:22:38.991247   17003 out.go:177] * Restarting existing qemu2 VM for "ha-793000" ...
	I0729 04:22:38.995287   17003 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:22:38.995480   17003 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2d:d0:8b:12:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:22:39.002980   17003 main.go:141] libmachine: STDOUT: 
	I0729 04:22:39.003029   17003 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:22:39.003088   17003 fix.go:56] duration metric: took 20.902958ms for fixHost
	I0729 04:22:39.003110   17003 start.go:83] releasing machines lock for "ha-793000", held for 21.032417ms
	W0729 04:22:39.003267   17003 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-793000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-793000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:22:39.011231   17003 out.go:177] 
	W0729 04:22:39.015317   17003 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:22:39.015345   17003 out.go:239] * 
	* 
	W0729 04:22:39.017249   17003 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:22:39.024251   17003 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-793000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-793000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (32.292125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.20s)
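Every failure above collapses to one host-side fault: the qemu2 driver launches /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet, so the VM never boots. Below is a minimal Go sketch of that connectivity probe, written for this report; the socket path comes from the log, but the probe itself is illustrative and is not minikube code.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path as reported in the log above
	if _, err := os.Stat(sock); err != nil {
		fmt.Fprintf(os.Stderr, "socket missing: %v\n", err) // daemon never created it
		os.Exit(1)
	}
	// "Connection refused" on an existing socket file means nothing is listening,
	// i.e. the socket_vmnet daemon is down even though the path exists.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Given that every start attempt in this run reports the same refusal, the socket_vmnet service on this Jenkins host was evidently down for the duration of the run.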

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.152667ms)

-- stdout --
	* The control-plane node ha-793000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-793000"

-- /stdout --
** stderr ** 
	I0729 04:22:39.162002   17015 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:39.162432   17015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:39.162436   17015 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:39.162438   17015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:39.162620   17015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:39.162831   17015 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:39.163012   17015 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:39.167817   17015 out.go:177] * The control-plane node ha-793000 host is not running: state=Stopped
	I0729 04:22:39.171835   17015 out.go:177]   To start a cluster, run: "minikube start -p ha-793000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-793000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (29.849708ms)

-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:22:39.204983   17017 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:39.205131   17017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:39.205134   17017 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:39.205136   17017 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:39.205269   17017 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:39.205377   17017 out.go:298] Setting JSON to false
	I0729 04:22:39.205386   17017 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:39.205452   17017 notify.go:220] Checking for updates...
	I0729 04:22:39.205601   17017 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:39.205617   17017 status.go:255] checking status of ha-793000 ...
	I0729 04:22:39.205816   17017 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:22:39.205820   17017 status.go:343] host is not running, skipping remaining checks
	I0729 04:22:39.205822   17017 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (30.043ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-793000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.059584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.07s)
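The Degraded* assertions (ha_test.go:390/413) shell out to "profile list --output json" and inspect the Status field of the ha-793000 entry. A hedged sketch of that decode step, using only the field names visible in the JSON above; the struct here is illustrative and is not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Illustrative subset of the JSON printed above; tags match the output keys.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Relative binary path as used throughout this report (assumes the test tree's cwd).
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-793000" {
			// The test expects "Degraded" here; this run got "Stopped".
			fmt.Println("status:", p.Status)
		}
	}
}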

TestMultiControlPlane/serial/StopCluster (3.57s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-793000 stop -v=7 --alsologtostderr: (3.46974625s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr: exit status 7 (62.997416ms)

-- stdout --
	ha-793000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:22:42.842776   17044 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:42.842945   17044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:42.842949   17044 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:42.842953   17044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:42.843119   17044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:42.843267   17044 out.go:298] Setting JSON to false
	I0729 04:22:42.843279   17044 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:42.843322   17044 notify.go:220] Checking for updates...
	I0729 04:22:42.843555   17044 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:42.843563   17044 status.go:255] checking status of ha-793000 ...
	I0729 04:22:42.843854   17044 status.go:330] ha-793000 host status = "Stopped" (err=<nil>)
	I0729 04:22:42.843859   17044 status.go:343] host is not running, skipping remaining checks
	I0729 04:22:42.843862   17044 status.go:257] ha-793000 status: &{Name:ha-793000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-793000 status -v=7 --alsologtostderr": ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (31.774583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.57s)
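Note that the stop itself succeeded (exit 0 in 3.47s); only the assertions failed. The three messages above imply the check is a simple line count over the status text: two "Control Plane" entries, three stopped kubelets, two stopped apiservers. With only one node ever registered, all three counts come up short. A small sketch of that counting logic, as an approximation of what ha_test.go:543-552 appears to verify, not the actual test code:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as printed above; a three-node HA cluster would repeat
	// this block once per node.
	status := `ha-793000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped`

	fmt.Println("control planes:", strings.Count(status, "type: Control Plane")) // test wants 2
	fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))  // test wants 3
	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // test wants 2
}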

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-793000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-793000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.184993333s)

-- stdout --
	* [ha-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-793000" primary control-plane node in "ha-793000" cluster
	* Restarting existing qemu2 VM for "ha-793000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-793000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:22:42.905030   17048 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:42.905165   17048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:42.905168   17048 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:42.905171   17048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:42.905333   17048 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:42.906363   17048 out.go:298] Setting JSON to false
	I0729 04:22:42.922634   17048 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8531,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:22:42.922700   17048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:22:42.926923   17048 out.go:177] * [ha-793000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:22:42.934712   17048 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:22:42.934798   17048 notify.go:220] Checking for updates...
	I0729 04:22:42.940787   17048 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:22:42.943685   17048 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:22:42.946786   17048 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:22:42.949781   17048 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:22:42.952705   17048 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:22:42.956074   17048 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:42.956352   17048 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:22:42.960723   17048 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:22:42.967746   17048 start.go:297] selected driver: qemu2
	I0729 04:22:42.967751   17048 start.go:901] validating driver "qemu2" against &{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:22:42.967824   17048 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:22:42.970111   17048 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:22:42.970162   17048 cni.go:84] Creating CNI manager for ""
	I0729 04:22:42.970170   17048 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:22:42.970208   17048 start.go:340] cluster config:
	{Name:ha-793000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-793000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:22:42.973755   17048 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:22:42.983747   17048 out.go:177] * Starting "ha-793000" primary control-plane node in "ha-793000" cluster
	I0729 04:22:42.987737   17048 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:22:42.987752   17048 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:22:42.987760   17048 cache.go:56] Caching tarball of preloaded images
	I0729 04:22:42.987820   17048 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:22:42.987826   17048 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:22:42.987895   17048 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/ha-793000/config.json ...
	I0729 04:22:42.988323   17048 start.go:360] acquireMachinesLock for ha-793000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:22:42.988352   17048 start.go:364] duration metric: took 22.667µs to acquireMachinesLock for "ha-793000"
	I0729 04:22:42.988362   17048 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:22:42.988366   17048 fix.go:54] fixHost starting: 
	I0729 04:22:42.988486   17048 fix.go:112] recreateIfNeeded on ha-793000: state=Stopped err=<nil>
	W0729 04:22:42.988494   17048 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:22:42.996725   17048 out.go:177] * Restarting existing qemu2 VM for "ha-793000" ...
	I0729 04:22:43.000588   17048 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:22:43.000616   17048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2d:d0:8b:12:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:22:43.002653   17048 main.go:141] libmachine: STDOUT: 
	I0729 04:22:43.002671   17048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:22:43.002697   17048 fix.go:56] duration metric: took 14.330125ms for fixHost
	I0729 04:22:43.002701   17048 start.go:83] releasing machines lock for "ha-793000", held for 14.345208ms
	W0729 04:22:43.002707   17048 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:22:43.002740   17048 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:22:43.002744   17048 start.go:729] Will try again in 5 seconds ...
	I0729 04:22:48.004834   17048 start.go:360] acquireMachinesLock for ha-793000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:22:48.005345   17048 start.go:364] duration metric: took 383.083µs to acquireMachinesLock for "ha-793000"
	I0729 04:22:48.005499   17048 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:22:48.005523   17048 fix.go:54] fixHost starting: 
	I0729 04:22:48.006276   17048 fix.go:112] recreateIfNeeded on ha-793000: state=Stopped err=<nil>
	W0729 04:22:48.006305   17048 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:22:48.013714   17048 out.go:177] * Restarting existing qemu2 VM for "ha-793000" ...
	I0729 04:22:48.017727   17048 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:22:48.017985   17048 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:2d:d0:8b:12:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/ha-793000/disk.qcow2
	I0729 04:22:48.027517   17048 main.go:141] libmachine: STDOUT: 
	I0729 04:22:48.027582   17048 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:22:48.027691   17048 fix.go:56] duration metric: took 22.171791ms for fixHost
	I0729 04:22:48.027707   17048 start.go:83] releasing machines lock for "ha-793000", held for 22.338833ms
	W0729 04:22:48.027945   17048 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-793000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-793000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:22:48.033713   17048 out.go:177] 
	W0729 04:22:48.036825   17048 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:22:48.036851   17048 out.go:239] * 
	* 
	W0729 04:22:48.039444   17048 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:22:48.048676   17048 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-793000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (68.506125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
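The restart path is deterministic across this whole report: fixHost fails within roughly 15-20ms, start.go:729 waits five seconds, the second attempt fails identically, and the run exits with GUEST_PROVISION (compare the 04:22:42 and 04:22:48 timestamps above). A simplified paraphrase of that two-attempt flow, modeled on the log rather than on minikube's actual start.go:

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// startHost stands in for fixHost/driver start; here it only probes the socket.
func startHost() error {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		return errRefused
	}
	conn.Close()
	return nil
}

func main() {
	// Two attempts with a fixed 5s pause, as the log shows.
	for attempt := 1; attempt <= 2; attempt++ {
		err := startHost()
		if err == nil {
			fmt.Println("host started")
			return
		}
		if attempt == 1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}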

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-793000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-793000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-793000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.294125ms)

-- stdout --
	* The control-plane node ha-793000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-793000"

-- /stdout --
** stderr ** 
	I0729 04:22:48.239187   17068 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:22:48.239348   17068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:48.239351   17068 out.go:304] Setting ErrFile to fd 2...
	I0729 04:22:48.239353   17068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:22:48.239487   17068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:22:48.239735   17068 mustload.go:65] Loading cluster: ha-793000
	I0729 04:22:48.239901   17068 config.go:182] Loaded profile config "ha-793000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:22:48.244682   17068 out.go:177] * The control-plane node ha-793000 host is not running: state=Stopped
	I0729 04:22:48.248699   17068 out.go:177]   To start a cluster, run: "minikube start -p ha-793000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-793000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.524ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-793000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-793000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-793000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-793000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-793000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-793000 -n ha-793000: exit status 7 (29.754666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-793000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (10.03s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-577000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-577000 --driver=qemu2 : exit status 80 (9.958366209s)

-- stdout --
	* [image-577000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-577000" primary control-plane node in "image-577000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-577000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-577000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-577000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-577000 -n image-577000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-577000 -n image-577000: exit status 7 (67.009333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-577000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.03s)
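
Every provisioning failure in this report shares the same proximate cause: qemu is launched through socket_vmnet_client, and the connect to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening on this agent. A minimal probe along the following lines (an illustrative Go sketch, not part of the test suite) would confirm that directly on the CI host:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" here means nothing is listening on the socket,
	// i.e. the socket_vmnet daemon is down or was never started.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // matches the ERROR lines above
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}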
TestJSONOutput/start/Command (9.87s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-312000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-312000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.86944075s)
-- stdout --
	{"specversion":"1.0","id":"02db8699-4a9f-46e8-b266-a2eeaae77a8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-312000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91440483-40a7-4c80-bc0b-a776fc775457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19341"}}
	{"specversion":"1.0","id":"28d9fda2-bf76-4965-ba0e-59f6c952b90c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig"}}
	{"specversion":"1.0","id":"d02dd01f-26e4-495c-9e29-23e9d4b9ab03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7270fe0b-d987-450b-a0e2-357f2302db1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6fcdbf87-7614-4b3b-82cc-a8a64b560a5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube"}}
	{"specversion":"1.0","id":"67091623-3036-4487-a9b1-ec43a4d9358c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f45e6cd-c1eb-4e3f-aa1f-cc28c61382da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"089c7ec8-ddca-4f93-be4d-4003e1457303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b7fec5d3-f198-4d2f-b785-4845c7e002fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-312000\" primary control-plane node in \"json-output-312000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bc2d982-25bf-4e6e-abcf-e099a6099b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0fb32c3d-8591-459a-96b9-06d201a40644","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-312000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3e478d0-7f73-482c-a0a9-6f1de97db35e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"641c02d8-3316-44a1-99fa-2897686b8178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"89346e5d-061f-4421-9e04-925dd3717ff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-312000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"672267a7-3ab2-4d99-9351-46c765e1b0c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"6d46779d-091a-4568-8530-0c3d40d2847f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-312000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.87s)
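
The secondary failure at json_output_test.go:70 follows directly from the stdout above: the test decodes stdout line by line as CloudEvents JSON, and the raw "OUTPUT:" / "ERROR:" lines that socket_vmnet_client interleaves into the stream are not JSON, so decoding stops at the first such line with "invalid character 'O'". (TestJSONOutput/unpause below fails the same way on a leading '*', because its output there is plain text.) A condensed sketch of that decode step; the loop here is an assumption, not the test's actual code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two lines as they appear in the stdout above: one CloudEvent, then the
	// raw socket_vmnet_client output mixed into the same stream.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}
OUTPUT: `

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("ok:", ev["type"])
	}
}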
TestJSONOutput/pause/Command (0.08s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-312000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-312000 --output=json --user=testUser: exit status 83 (77.988375ms)
-- stdout --
	{"specversion":"1.0","id":"cb7dc430-a9ff-4960-9a7a-4d0e39086d28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-312000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"747fd5f1-a7aa-4e9b-ac4c-ad67ba161869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-312000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-312000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)
TestJSONOutput/unpause/Command (0.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-312000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-312000 --output=json --user=testUser: exit status 83 (46.1605ms)
-- stdout --
	* The control-plane node json-output-312000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-312000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-312000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-312000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
TestMinikubeProfile (10.18s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-037000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-037000 --driver=qemu2 : exit status 80 (9.889575792s)
-- stdout --
	* [first-037000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-037000" primary control-plane node in "first-037000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-037000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-037000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-037000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 04:23:22.436953 -0700 PDT m=+433.739370626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-039000 -n second-039000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-039000 -n second-039000: exit status 85 (79.670875ms)
-- stdout --
	* Profile "second-039000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-039000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-039000" host is not running, skipping log retrieval (state="* Profile \"second-039000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-039000\"")
helpers_test.go:175: Cleaning up "second-039000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-039000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-29 04:23:22.622733 -0700 PDT m=+433.925154960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-037000 -n first-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-037000 -n first-037000: exit status 7 (29.377375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-037000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-037000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-037000
--- FAIL: TestMinikubeProfile (10.18s)
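
The post-mortem helpers accept several non-zero exit codes as expected ("may be ok"). For reference, the codes recurring in this report, with meanings read off the adjacent log lines rather than taken from minikube's documentation:

package main

import "fmt"

// Exit codes observed in this report; annotations are inferences from the
// nearby log lines, not authoritative minikube semantics.
var observedExitCodes = map[int]string{
	80: `start: GUEST_PROVISION, qemu could not reach /var/run/socket_vmnet`,
	83: `pause/unpause: control-plane host is not running (state=Stopped)`,
	85: `status: profile not found (e.g. "second-039000" was never created)`,
	7:  `status: host exists but is Stopped`,
	1:  `kubectl against a cluster that does not exist`,
}

func main() {
	for code, meaning := range observedExitCodes {
		fmt.Printf("exit %d: %s\n", code, meaning)
	}
}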
TestMountStart/serial/StartWithMountFirst (10.08s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.013230875s)
-- stdout --
	* [mount-start-1-720000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-720000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-720000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-720000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-720000 -n mount-start-1-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-720000 -n mount-start-1-720000: exit status 7 (69.98675ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-720000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.08s)
TestMultiNode/serial/FreshStart2Nodes (9.86s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.7903625s)
-- stdout --
	* [multinode-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0729 04:23:33.017841   17245 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:23:33.018002   17245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:23:33.018005   17245 out.go:304] Setting ErrFile to fd 2...
	I0729 04:23:33.018008   17245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:23:33.018133   17245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:23:33.019227   17245 out.go:298] Setting JSON to false
	I0729 04:23:33.035265   17245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8582,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:23:33.035335   17245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:23:33.040444   17245 out.go:177] * [multinode-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:23:33.048565   17245 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:23:33.048682   17245 notify.go:220] Checking for updates...
	I0729 04:23:33.056407   17245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:23:33.059481   17245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:23:33.063469   17245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:23:33.066460   17245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:23:33.069477   17245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:23:33.072570   17245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:23:33.076517   17245 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:23:33.083366   17245 start.go:297] selected driver: qemu2
	I0729 04:23:33.083374   17245 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:23:33.083382   17245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:23:33.085695   17245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:23:33.089448   17245 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:23:33.092575   17245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:23:33.092590   17245 cni.go:84] Creating CNI manager for ""
	I0729 04:23:33.092595   17245 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 04:23:33.092602   17245 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:23:33.092627   17245 start.go:340] cluster config:
	{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:23:33.096302   17245 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:23:33.105506   17245 out.go:177] * Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	I0729 04:23:33.109425   17245 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:23:33.109443   17245 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:23:33.109457   17245 cache.go:56] Caching tarball of preloaded images
	I0729 04:23:33.109530   17245 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:23:33.109536   17245 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:23:33.109767   17245 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/multinode-301000/config.json ...
	I0729 04:23:33.109778   17245 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/multinode-301000/config.json: {Name:mk08083cfa9173b02c48b386af24184ba3a21904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:23:33.110021   17245 start.go:360] acquireMachinesLock for multinode-301000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:23:33.110057   17245 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "multinode-301000"
	I0729 04:23:33.110070   17245 start.go:93] Provisioning new machine with config: &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:23:33.110099   17245 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:23:33.118422   17245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:23:33.136637   17245 start.go:159] libmachine.API.Create for "multinode-301000" (driver="qemu2")
	I0729 04:23:33.136663   17245 client.go:168] LocalClient.Create starting
	I0729 04:23:33.136723   17245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:23:33.136752   17245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:33.136761   17245 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:33.136816   17245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:23:33.136840   17245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:33.136849   17245 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:33.137208   17245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:23:33.289605   17245 main.go:141] libmachine: Creating SSH key...
	I0729 04:23:33.346208   17245 main.go:141] libmachine: Creating Disk image...
	I0729 04:23:33.346212   17245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:23:33.346435   17245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:23:33.355622   17245 main.go:141] libmachine: STDOUT: 
	I0729 04:23:33.355640   17245 main.go:141] libmachine: STDERR: 
	I0729 04:23:33.355693   17245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2 +20000M
	I0729 04:23:33.363559   17245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:23:33.363587   17245 main.go:141] libmachine: STDERR: 
	I0729 04:23:33.363603   17245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:23:33.363607   17245 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:23:33.363616   17245 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:23:33.363642   17245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:6e:5e:0d:85:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:23:33.365285   17245 main.go:141] libmachine: STDOUT: 
	I0729 04:23:33.365302   17245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:23:33.365320   17245 client.go:171] duration metric: took 228.6575ms to LocalClient.Create
	I0729 04:23:35.367455   17245 start.go:128] duration metric: took 2.2573845s to createHost
	I0729 04:23:35.367534   17245 start.go:83] releasing machines lock for "multinode-301000", held for 2.257522292s
	W0729 04:23:35.367609   17245 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:23:35.378911   17245 out.go:177] * Deleting "multinode-301000" in qemu2 ...
	W0729 04:23:35.412675   17245 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:23:35.412719   17245 start.go:729] Will try again in 5 seconds ...
	I0729 04:23:40.414876   17245 start.go:360] acquireMachinesLock for multinode-301000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:23:40.415274   17245 start.go:364] duration metric: took 316.917µs to acquireMachinesLock for "multinode-301000"
	I0729 04:23:40.415403   17245 start.go:93] Provisioning new machine with config: &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:23:40.415664   17245 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:23:40.425125   17245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:23:40.474871   17245 start.go:159] libmachine.API.Create for "multinode-301000" (driver="qemu2")
	I0729 04:23:40.474922   17245 client.go:168] LocalClient.Create starting
	I0729 04:23:40.475047   17245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:23:40.475115   17245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:40.475131   17245 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:40.475197   17245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:23:40.475243   17245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:23:40.475262   17245 main.go:141] libmachine: Parsing certificate...
	I0729 04:23:40.475835   17245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:23:40.636174   17245 main.go:141] libmachine: Creating SSH key...
	I0729 04:23:40.717919   17245 main.go:141] libmachine: Creating Disk image...
	I0729 04:23:40.717924   17245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:23:40.718163   17245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:23:40.727265   17245 main.go:141] libmachine: STDOUT: 
	I0729 04:23:40.727285   17245 main.go:141] libmachine: STDERR: 
	I0729 04:23:40.727324   17245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2 +20000M
	I0729 04:23:40.735145   17245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:23:40.735163   17245 main.go:141] libmachine: STDERR: 
	I0729 04:23:40.735172   17245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:23:40.735176   17245 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:23:40.735190   17245 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:23:40.735214   17245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:47:fd:4e:17:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:23:40.736820   17245 main.go:141] libmachine: STDOUT: 
	I0729 04:23:40.736836   17245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:23:40.736849   17245 client.go:171] duration metric: took 261.928625ms to LocalClient.Create
	I0729 04:23:42.739037   17245 start.go:128] duration metric: took 2.323389917s to createHost
	I0729 04:23:42.739143   17245 start.go:83] releasing machines lock for "multinode-301000", held for 2.323899042s
	W0729 04:23:42.739595   17245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:23:42.749197   17245 out.go:177] 
	W0729 04:23:42.754310   17245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:23:42.754336   17245 out.go:239] * 
	* 
	W0729 04:23:42.756871   17245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:23:42.766255   17245 out.go:177] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-301000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (71.042292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.86s)
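
The --alsologtostderr trace above makes the start flow explicit: create the disk with qemu-img, launch qemu through socket_vmnet_client, hit the refused connect, delete the half-created machine, wait five seconds, retry once, then exit with code 80. Reduced to its control flow (a simplification of what the trace shows, not minikube's actual source):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// Stand-in for the qemu-img + socket_vmnet_client sequence in the trace;
// on this agent the socket connect always fails.
func createHost(profile string) error {
	return errors.New(`creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	const profile = "multinode-301000"
	if err := createHost(profile); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile) // the half-created VM is removed first
		time.Sleep(5 * time.Second)                         // "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status 80 reported by the test harness
		}
	}
}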
TestMultiNode/serial/DeployApp2Nodes (100.5s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.161708ms)
** stderr ** 
	error: cluster "multinode-301000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- rollout status deployment/busybox: exit status 1 (55.5585ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.256333ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.440708ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.771709ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.249417ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.16175ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.643709ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.209833ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.50625ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.390292ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.277583ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.126583ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.544208ms)
** stderr ** 
	error: no server found for cluster "multinode-301000"
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.277708ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.885209ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.968916ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.547042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (100.50s)
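
Every retry above fails the same way: the profile's kubeconfig entry has no reachable API server because the "multinode-301000" VM never started, so polling pod IPs can only time out. A minimal sketch of that poll-until-ready pattern (the command mirrors the log; the helper name and retry count here are hypothetical, not the suite's real API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs shells out exactly like the test log above and splits the
// space-separated jsonpath output into one string per pod.
func podIPs(profile string) ([]string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, fmt.Errorf("failed to retrieve Pod IPs (may be temporary): %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The log shows several back-to-back attempts before giving up.
	for i := 0; i < 4; i++ {
		if ips, err := podIPs("multinode-301000"); err == nil && len(ips) > 0 {
			fmt.Println(ips)
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs")
}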

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.973542ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.552625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-301000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-301000 -v 3 --alsologtostderr: exit status 83 (43.437083ms)

-- stdout --
	* The control-plane node multinode-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-301000"

-- /stdout --
** stderr ** 
	I0729 04:25:23.463431   17425 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:23.463609   17425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.463613   17425 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:23.463615   17425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.463734   17425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:23.463959   17425 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:23.464156   17425 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:23.469063   17425 out.go:177] * The control-plane node multinode-301000 host is not running: state=Stopped
	I0729 04:25:23.473001   17425 out.go:177]   To start a cluster, run: "minikube start -p multinode-301000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-301000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.507167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
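
The exit here is not a crash: `node add` checks that the control-plane host is running before doing anything, and the stdout block above carries the remedy. A minimal sketch of honoring that precondition (binary path and flags copied from the log; the `run` helper is hypothetical):

package main

import (
	"os"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and
// streams its output, returning any non-zero exit as an error.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The log's own advice: To start a cluster, run: "minikube start -p multinode-301000"
	if err := run("start", "-p", "multinode-301000"); err != nil {
		return // node add would fail with the same state=Stopped message
	}
	_ = run("node", "add", "-p", "multinode-301000", "-v", "3", "--alsologtostderr")
}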

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-301000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-301000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.203333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-301000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-301000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-301000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (30.520625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
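
The second error is a direct consequence of the first: kubectl printed nothing to stdout because the context lookup failed, and decoding an empty byte slice with encoding/json always reports "unexpected end of JSON input". A short self-contained demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels map[string]string
	// kubectl wrote nothing, so the test hands json.Unmarshal an empty input.
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}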

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-301000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-301000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.807292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
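
The expectation fails because the Config.Nodes array in the escaped JSON above holds a single control-plane entry; the second and third nodes were never added. A minimal sketch of the node-count check (struct shape inferred from the JSON captured in the log, trimmed to the fields used here):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just enough of `minikube profile list --output json`
// (as captured above) to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // expected 3, log shows 1
	}
}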

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --output json --alsologtostderr: exit status 7 (29.555917ms)

-- stdout --
	{"Name":"multinode-301000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0729 04:25:23.670079   17437 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:23.670225   17437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.670228   17437 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:23.670231   17437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.670348   17437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:23.670474   17437 out.go:298] Setting JSON to true
	I0729 04:25:23.670494   17437 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:23.670534   17437 notify.go:220] Checking for updates...
	I0729 04:25:23.670686   17437 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:23.670693   17437 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:23.670894   17437 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:23.670898   17437 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:23.670900   17437 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-301000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.731916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
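
The decode error is a shape mismatch, not corrupt output: with only one node, `minikube status --output json` emits a single object (visible in the stdout block above), while the test unmarshals into a slice because a multi-node cluster would produce a JSON array. A small demonstration using the exact object from the log (`status` here stands in for the suite's cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	raw := []byte(`{"Name":"multinode-301000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []status
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.status

	var one status
	fmt.Println(json.Unmarshal(raw, &one)) // <nil>: the object decodes fine on its own
}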

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 node stop m03: exit status 85 (46.756917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-301000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status: exit status 7 (29.756833ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr: exit status 7 (30.381709ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:23.807554   17445 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:23.807700   17445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.807703   17445 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:23.807705   17445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.807836   17445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:23.807958   17445 out.go:298] Setting JSON to false
	I0729 04:25:23.807970   17445 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:23.808024   17445 notify.go:220] Checking for updates...
	I0729 04:25:23.808175   17445 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:23.808182   17445 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:23.808392   17445 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:23.808396   17445 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:23.808399   17445 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr": multinode-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.174709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
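
m03 cannot be stopped because it was never created: AddNode already failed above, so the profile still has only its primary node (minikube names additional nodes <profile>-m02, -m03, and so on). A minimal guard sketch, assuming that naming convention and using the same `node list` command the RestartKeepsNodes test runs below:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the profile's nodes first; a plain substring check is enough here.
	out, _ := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", "multinode-301000").CombinedOutput()
	if !strings.Contains(string(out), "multinode-301000-m03") {
		fmt.Println("node m03 does not exist; nothing to stop")
		return
	}
	if err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-301000", "node", "stop", "m03").Run(); err != nil {
		fmt.Println("node stop failed:", err)
	}
}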

TestMultiNode/serial/StartAfterStop (55.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.058333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 04:25:23.866526   17449 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:23.866892   17449 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.866896   17449 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:23.866899   17449 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.867061   17449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:23.867292   17449 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:23.867465   17449 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:23.872046   17449 out.go:177] 
	W0729 04:25:23.876062   17449 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 04:25:23.876067   17449 out.go:239] * 
	* 
	W0729 04:25:23.878321   17449 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:25:23.881970   17449 out.go:177] 

** /stderr **
multinode_test.go:284: I0729 04:25:23.866526   17449 out.go:291] Setting OutFile to fd 1 ...
I0729 04:25:23.866892   17449 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:25:23.866896   17449 out.go:304] Setting ErrFile to fd 2...
I0729 04:25:23.866899   17449 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:25:23.867061   17449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
I0729 04:25:23.867292   17449 mustload.go:65] Loading cluster: multinode-301000
I0729 04:25:23.867465   17449 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:25:23.872046   17449 out.go:177] 
W0729 04:25:23.876062   17449 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 04:25:23.876067   17449 out.go:239] * 
* 
W0729 04:25:23.878321   17449 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:25:23.881970   17449 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-301000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (29.979542ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:23.914796   17451 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:23.914951   17451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.914954   17451 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:23.914957   17451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:23.915084   17451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:23.915208   17451 out.go:298] Setting JSON to false
	I0729 04:25:23.915217   17451 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:23.915280   17451 notify.go:220] Checking for updates...
	I0729 04:25:23.915411   17451 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:23.915421   17451 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:23.915622   17451 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:23.915626   17451 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:23.915628   17451 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (75.296125ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:25.295693   17453 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:25.295940   17453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:25.295944   17453 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:25.295948   17453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:25.296143   17453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:25.296302   17453 out.go:298] Setting JSON to false
	I0729 04:25:25.296316   17453 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:25.296369   17453 notify.go:220] Checking for updates...
	I0729 04:25:25.296612   17453 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:25.296620   17453 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:25.296933   17453 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:25.296938   17453 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:25.296941   17453 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (73.749708ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:26.647950   17455 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:26.648124   17455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:26.648129   17455 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:26.648132   17455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:26.648310   17455 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:26.648465   17455 out.go:298] Setting JSON to false
	I0729 04:25:26.648477   17455 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:26.648514   17455 notify.go:220] Checking for updates...
	I0729 04:25:26.648748   17455 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:26.648758   17455 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:26.649062   17455 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:26.649067   17455 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:26.649070   17455 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (72.609792ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:29.804083   17463 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:29.804278   17463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:29.804283   17463 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:29.804286   17463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:29.804442   17463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:29.804619   17463 out.go:298] Setting JSON to false
	I0729 04:25:29.804631   17463 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:29.804679   17463 notify.go:220] Checking for updates...
	I0729 04:25:29.804894   17463 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:29.804905   17463 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:29.805201   17463 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:29.805206   17463 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:29.805209   17463 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (74.439875ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:32.663682   17471 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:32.663888   17471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:32.663893   17471 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:32.663896   17471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:32.664066   17471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:32.664241   17471 out.go:298] Setting JSON to false
	I0729 04:25:32.664255   17471 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:32.664297   17471 notify.go:220] Checking for updates...
	I0729 04:25:32.664552   17471 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:32.664561   17471 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:32.664869   17471 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:32.664874   17471 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:32.664877   17471 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (73.217041ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:39.520015   17479 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:39.520247   17479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:39.520252   17479 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:39.520255   17479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:39.520449   17479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:39.520636   17479 out.go:298] Setting JSON to false
	I0729 04:25:39.520650   17479 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:39.520695   17479 notify.go:220] Checking for updates...
	I0729 04:25:39.520924   17479 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:39.520934   17479 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:39.521238   17479 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:39.521243   17479 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:39.521246   17479 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (73.339959ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:25:48.957570   17495 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:25:48.957776   17495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:48.957780   17495 out.go:304] Setting ErrFile to fd 2...
	I0729 04:25:48.957783   17495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:25:48.957950   17495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:25:48.958110   17495 out.go:298] Setting JSON to false
	I0729 04:25:48.958123   17495 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:25:48.958173   17495 notify.go:220] Checking for updates...
	I0729 04:25:48.958368   17495 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:25:48.958377   17495 status.go:255] checking status of multinode-301000 ...
	I0729 04:25:48.958655   17495 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:25:48.958661   17495 status.go:343] host is not running, skipping remaining checks
	I0729 04:25:48.958663   17495 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (75.774084ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:26:03.881942   17507 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:03.882165   17507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:03.882170   17507 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:03.882174   17507 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:03.882361   17507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:03.882528   17507 out.go:298] Setting JSON to false
	I0729 04:26:03.882542   17507 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:26:03.882594   17507 notify.go:220] Checking for updates...
	I0729 04:26:03.882843   17507 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:03.882856   17507 status.go:255] checking status of multinode-301000 ...
	I0729 04:26:03.883159   17507 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:26:03.883164   17507 status.go:343] host is not running, skipping remaining checks
	I0729 04:26:03.883167   17507 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (74.162625ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:26:18.998469   17524 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:18.998677   17524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:18.998682   17524 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:18.998685   17524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:18.998858   17524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:18.999018   17524 out.go:298] Setting JSON to false
	I0729 04:26:18.999031   17524 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:26:18.999071   17524 notify.go:220] Checking for updates...
	I0729 04:26:18.999310   17524 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:18.999319   17524 status.go:255] checking status of multinode-301000 ...
	I0729 04:26:18.999613   17524 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:26:18.999618   17524 status.go:343] host is not running, skipping remaining checks
	I0729 04:26:18.999621   17524 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (33.252167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.20s)
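
The timestamps in the status polls above (04:25:23.9, :25.3, :26.6, :29.8, :32.7, :39.5, :48.9, 04:26:03.9, 04:26:19.0) show the test backing off between checks until roughly 55 seconds have elapsed, which is where the reported duration comes from. A sketch of that shape, assuming simple interval doubling rather than the suite's exact schedule:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(55 * time.Second)
	for wait := time.Second; time.Now().Before(deadline); wait *= 2 {
		// Same status command the log records on every attempt.
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-301000").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("host is running")
			return
		}
		time.Sleep(wait)
	}
	fmt.Println("host never left Stopped; giving up") // matches the exit status 7 outcome
}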

TestMultiNode/serial/RestartKeepsNodes (8.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-301000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-301000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-301000: (2.970529291s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.224961042s)

-- stdout --
	* [multinode-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:22.098259   17550 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:22.098411   17550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:22.098416   17550 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:22.098419   17550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:22.098609   17550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:22.099874   17550 out.go:298] Setting JSON to false
	I0729 04:26:22.119421   17550 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8751,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:26:22.119501   17550 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:22.123878   17550 out.go:177] * [multinode-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:22.131820   17550 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:26:22.131877   17550 notify.go:220] Checking for updates...
	I0729 04:26:22.139657   17550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:26:22.142843   17550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:22.145816   17550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:22.148829   17550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:26:22.151847   17550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:22.155059   17550 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:22.155116   17550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:22.158763   17550 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:26:22.165827   17550 start.go:297] selected driver: qemu2
	I0729 04:26:22.165835   17550 start.go:901] validating driver "qemu2" against &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:22.165905   17550 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:22.168403   17550 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:22.168455   17550 cni.go:84] Creating CNI manager for ""
	I0729 04:26:22.168460   17550 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:26:22.168513   17550 start.go:340] cluster config:
	{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:22.172573   17550 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:22.180781   17550 out.go:177] * Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	I0729 04:26:22.184900   17550 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:26:22.184917   17550 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:26:22.184928   17550 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:22.185007   17550 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:22.185012   17550 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:26:22.185066   17550 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/multinode-301000/config.json ...
	I0729 04:26:22.185493   17550 start.go:360] acquireMachinesLock for multinode-301000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:22.185528   17550 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "multinode-301000"
	I0729 04:26:22.185542   17550 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:22.185547   17550 fix.go:54] fixHost starting: 
	I0729 04:26:22.185663   17550 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W0729 04:26:22.185671   17550 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:22.193786   17550 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I0729 04:26:22.197759   17550 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:22.197796   17550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:47:fd:4e:17:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:26:22.199991   17550 main.go:141] libmachine: STDOUT: 
	I0729 04:26:22.200013   17550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:22.200042   17550 fix.go:56] duration metric: took 14.49625ms for fixHost
	I0729 04:26:22.200046   17550 start.go:83] releasing machines lock for "multinode-301000", held for 14.510583ms
	W0729 04:26:22.200054   17550 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:22.200092   17550 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:22.200096   17550 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:27.202164   17550 start.go:360] acquireMachinesLock for multinode-301000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:27.202634   17550 start.go:364] duration metric: took 313.75µs to acquireMachinesLock for "multinode-301000"
	I0729 04:26:27.202777   17550 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:27.202800   17550 fix.go:54] fixHost starting: 
	I0729 04:26:27.203473   17550 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W0729 04:26:27.203503   17550 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:27.210926   17550 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I0729 04:26:27.214942   17550 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:27.215256   17550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:47:fd:4e:17:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:26:27.224390   17550 main.go:141] libmachine: STDOUT: 
	I0729 04:26:27.224451   17550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:27.224533   17550 fix.go:56] duration metric: took 21.73575ms for fixHost
	I0729 04:26:27.224550   17550 start.go:83] releasing machines lock for "multinode-301000", held for 21.895ms
	W0729 04:26:27.224713   17550 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:27.230933   17550 out.go:177] 
	W0729 04:26:27.234786   17550 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:27.234812   17550 out.go:239] * 
	W0729 04:26:27.237362   17550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:27.246942   17550 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-301000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-301000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (32.05375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.33s)

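Note on the failure mode common to this run: every start attempt above dies at the same host-side step. libmachine wraps the qemu-system-aarch64 command in /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet; the daemon is evidently not serving that socket on this agent. A minimal manual probe, assuming the paths shown in the logs above ("true" is only an illustrative stand-in for the real qemu command line):

    # Is the socket_vmnet daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # Exercise the client the same way libmachine does; a healthy daemon
    # lets it exec the wrapped command instead of failing with
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
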
TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 node delete m03: exit status 83 (41.743334ms)

-- stdout --
	* The control-plane node multinode-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-301000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-301000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr: exit status 7 (30.424292ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:26:27.434279   17568 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:27.434415   17568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:27.434418   17568 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:27.434421   17568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:27.434564   17568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:27.434677   17568 out.go:298] Setting JSON to false
	I0729 04:26:27.434686   17568 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:26:27.434736   17568 notify.go:220] Checking for updates...
	I0729 04:26:27.434886   17568 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:27.434893   17568 status.go:255] checking status of multinode-301000 ...
	I0729 04:26:27.435106   17568 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:26:27.435110   17568 status.go:343] host is not running, skipping remaining checks
	I0729 04:26:27.435113   17568 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (30.162208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (3.63s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-301000 stop: (3.498493625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status: exit status 7 (66.282208ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr: exit status 7 (32.641916ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0729 04:26:31.066710   17594 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:31.066835   17594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:31.066839   17594 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:31.066841   17594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:31.066984   17594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:31.067111   17594 out.go:298] Setting JSON to false
	I0729 04:26:31.067121   17594 mustload.go:65] Loading cluster: multinode-301000
	I0729 04:26:31.067170   17594 notify.go:220] Checking for updates...
	I0729 04:26:31.067354   17594 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:31.067360   17594 status.go:255] checking status of multinode-301000 ...
	I0729 04:26:31.067556   17594 status.go:330] multinode-301000 host status = "Stopped" (err=<nil>)
	I0729 04:26:31.067560   17594 status.go:343] host is not running, skipping remaining checks
	I0729 04:26:31.067562   17594 status.go:257] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr": multinode-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr": multinode-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.627375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.63s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179666333s)

-- stdout --
	* [multinode-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:31.126125   17598 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:31.126254   17598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:31.126257   17598 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:31.126259   17598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:31.126397   17598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:31.127428   17598 out.go:298] Setting JSON to false
	I0729 04:26:31.143598   17598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8760,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:26:31.143666   17598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:31.147817   17598 out.go:177] * [multinode-301000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:31.154611   17598 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:26:31.154640   17598 notify.go:220] Checking for updates...
	I0729 04:26:31.162468   17598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:26:31.165549   17598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:31.168636   17598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:31.171553   17598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:26:31.174595   17598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:31.177943   17598 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:31.178197   17598 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:31.182538   17598 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:26:31.189582   17598 start.go:297] selected driver: qemu2
	I0729 04:26:31.189588   17598 start.go:901] validating driver "qemu2" against &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:31.189634   17598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:31.191905   17598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:31.191926   17598 cni.go:84] Creating CNI manager for ""
	I0729 04:26:31.191931   17598 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:26:31.191966   17598 start.go:340] cluster config:
	{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:31.195672   17598 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:31.203578   17598 out.go:177] * Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	I0729 04:26:31.207580   17598 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:26:31.207597   17598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:26:31.207610   17598 cache.go:56] Caching tarball of preloaded images
	I0729 04:26:31.207666   17598 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:26:31.207673   17598 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:26:31.207740   17598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/multinode-301000/config.json ...
	I0729 04:26:31.208176   17598 start.go:360] acquireMachinesLock for multinode-301000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:31.208205   17598 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "multinode-301000"
	I0729 04:26:31.208215   17598 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:31.208221   17598 fix.go:54] fixHost starting: 
	I0729 04:26:31.208338   17598 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W0729 04:26:31.208347   17598 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:31.216592   17598 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I0729 04:26:31.220562   17598 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:31.220598   17598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:47:fd:4e:17:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:26:31.222687   17598 main.go:141] libmachine: STDOUT: 
	I0729 04:26:31.222708   17598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:31.222738   17598 fix.go:56] duration metric: took 14.516375ms for fixHost
	I0729 04:26:31.222743   17598 start.go:83] releasing machines lock for "multinode-301000", held for 14.53425ms
	W0729 04:26:31.222750   17598 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:31.222792   17598 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:31.222797   17598 start.go:729] Will try again in 5 seconds ...
	I0729 04:26:36.224905   17598 start.go:360] acquireMachinesLock for multinode-301000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:36.225337   17598 start.go:364] duration metric: took 314.833µs to acquireMachinesLock for "multinode-301000"
	I0729 04:26:36.225450   17598 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:26:36.225470   17598 fix.go:54] fixHost starting: 
	I0729 04:26:36.226135   17598 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W0729 04:26:36.226162   17598 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:26:36.230805   17598 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I0729 04:26:36.235735   17598 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:36.235936   17598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:47:fd:4e:17:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/multinode-301000/disk.qcow2
	I0729 04:26:36.241935   17598 main.go:141] libmachine: STDOUT: 
	I0729 04:26:36.241998   17598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:36.242084   17598 fix.go:56] duration metric: took 16.616125ms for fixHost
	I0729 04:26:36.242102   17598 start.go:83] releasing machines lock for "multinode-301000", held for 16.745292ms
	W0729 04:26:36.242283   17598 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:26:36.249725   17598 out.go:177] 
	W0729 04:26:36.253689   17598 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:26:36.253717   17598 out.go:239] * 
	W0729 04:26:36.256135   17598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:26:36.264633   17598 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (69.574458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (22.29s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-301000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000-m01 --driver=qemu2 : exit status 80 (11.266252584s)

-- stdout --
	* [multinode-301000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-301000-m01" primary control-plane node in "multinode-301000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-301000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2 : exit status 80 (10.801413875s)

-- stdout --
	* [multinode-301000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-301000-m02" primary control-plane node in "multinode-301000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-301000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-301000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-301000: exit status 83 (78.89475ms)

-- stdout --
	* The control-plane node multinode-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-301000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-301000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.872666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (22.29s)

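The ValidateNameConflict run above, like TestPreload below, hits the same connect error on freshly created VMs ("Creating qemu2 VM ..."), not just on restarts, so stale machine state can be ruled out; the socket_vmnet service is unreachable for the entire run. A diagnostic sketch to separate the qemu2 driver from the socket_vmnet network layer, assuming this minikube build accepts --network=user for qemu2 (user-mode networking cannot serve the multinode or tunnel tests, so this only isolates the fault; the profile name vmnet-probe is arbitrary):

    # If this start succeeds, qemu2/hvf are healthy and only the
    # socket_vmnet daemon on the agent needs attention
    out/minikube-darwin-arm64 start -p vmnet-probe --driver=qemu2 --network=user
    out/minikube-darwin-arm64 delete -p vmnet-probe
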
TestPreload (10.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-274000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-274000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.963523917s)

-- stdout --
	* [test-preload-274000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-274000" primary control-plane node in "test-preload-274000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-274000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:26:58.782512   17685 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:26:58.782649   17685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:58.782652   17685 out.go:304] Setting ErrFile to fd 2...
	I0729 04:26:58.782654   17685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:26:58.782781   17685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:26:58.783867   17685 out.go:298] Setting JSON to false
	I0729 04:26:58.800042   17685 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8787,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:26:58.800170   17685 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:26:58.805369   17685 out.go:177] * [test-preload-274000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:26:58.812246   17685 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:26:58.812288   17685 notify.go:220] Checking for updates...
	I0729 04:26:58.819236   17685 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:26:58.822231   17685 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:26:58.826225   17685 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:26:58.829187   17685 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:26:58.832316   17685 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:26:58.835542   17685 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:26:58.835587   17685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:26:58.840249   17685 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:26:58.847181   17685 start.go:297] selected driver: qemu2
	I0729 04:26:58.847187   17685 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:26:58.847194   17685 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:26:58.849539   17685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:26:58.854164   17685 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:26:58.857355   17685 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:26:58.857419   17685 cni.go:84] Creating CNI manager for ""
	I0729 04:26:58.857427   17685 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:26:58.857438   17685 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:26:58.857475   17685 start.go:340] cluster config:
	{Name:test-preload-274000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-274000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:26:58.861251   17685 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.870260   17685 out.go:177] * Starting "test-preload-274000" primary control-plane node in "test-preload-274000" cluster
	I0729 04:26:58.874193   17685 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0729 04:26:58.874282   17685 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/test-preload-274000/config.json ...
	I0729 04:26:58.874302   17685 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/test-preload-274000/config.json: {Name:mk92ad36b202bf13831f79800ed6b1e10a4c5cab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:26:58.874349   17685 cache.go:107] acquiring lock: {Name:mk899f9a594768a2184e26b206c707132da4274d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874354   17685 cache.go:107] acquiring lock: {Name:mk29f974dacab5860b93ee5c2a44e6e108c44727 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874378   17685 cache.go:107] acquiring lock: {Name:mk8625214f7eb9f42f5368d5711a80c3746e303e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874461   17685 cache.go:107] acquiring lock: {Name:mk5f01181152d3a1a03594ca750f9d4362fb861b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874473   17685 cache.go:107] acquiring lock: {Name:mk8bc4d8ffa9c1c8ca7d93dbace8e38c756cefd1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874512   17685 cache.go:107] acquiring lock: {Name:mk970bd78ec6c3faf152c17df6d1b6fdd141f523 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874474   17685 cache.go:107] acquiring lock: {Name:mk21c75fccd8b31379f7ce1e8014666193bd35d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874448   17685 cache.go:107] acquiring lock: {Name:mk9cb17d38c3d13d88775cf8514275a976b9727f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:26:58.874655   17685 start.go:360] acquireMachinesLock for test-preload-274000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:26:58.874759   17685 start.go:364] duration metric: took 89.709µs to acquireMachinesLock for "test-preload-274000"
	I0729 04:26:58.874810   17685 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 04:26:58.874825   17685 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 04:26:58.874829   17685 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 04:26:58.874772   17685 start.go:93] Provisioning new machine with config: &{Name:test-preload-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-274000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:26:58.874845   17685 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:26:58.874812   17685 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:26:58.874861   17685 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:26:58.874872   17685 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 04:26:58.874979   17685 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:26:58.875435   17685 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:26:58.880137   17685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:26:58.886293   17685 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:26:58.886338   17685 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:26:58.886368   17685 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 04:26:58.886385   17685 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 04:26:58.886397   17685 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:26:58.886304   17685 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 04:26:58.886955   17685 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 04:26:58.887841   17685 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:26:58.898261   17685 start.go:159] libmachine.API.Create for "test-preload-274000" (driver="qemu2")
	I0729 04:26:58.898278   17685 client.go:168] LocalClient.Create starting
	I0729 04:26:58.898339   17685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:26:58.898376   17685 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:58.898386   17685 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:58.898428   17685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:26:58.898458   17685 main.go:141] libmachine: Decoding PEM data...
	I0729 04:26:58.898468   17685 main.go:141] libmachine: Parsing certificate...
	I0729 04:26:58.898841   17685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:26:59.047082   17685 main.go:141] libmachine: Creating SSH key...
	I0729 04:26:59.125035   17685 main.go:141] libmachine: Creating Disk image...
	I0729 04:26:59.125101   17685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:26:59.125400   17685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2
	I0729 04:26:59.135187   17685 main.go:141] libmachine: STDOUT: 
	I0729 04:26:59.135231   17685 main.go:141] libmachine: STDERR: 
	I0729 04:26:59.135290   17685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2 +20000M
	I0729 04:26:59.144389   17685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:26:59.144407   17685 main.go:141] libmachine: STDERR: 
	I0729 04:26:59.144523   17685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2
	I0729 04:26:59.144529   17685 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:26:59.144595   17685 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:26:59.144710   17685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:6e:98:e8:d9:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2
	I0729 04:26:59.146804   17685 main.go:141] libmachine: STDOUT: 
	I0729 04:26:59.146823   17685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:26:59.146839   17685 client.go:171] duration metric: took 248.563083ms to LocalClient.Create
	I0729 04:26:59.280225   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:26:59.282974   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 04:26:59.286869   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:26:59.304389   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 04:26:59.333136   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 04:26:59.380115   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0729 04:26:59.399570   17685 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:26:59.399607   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:26:59.415661   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0729 04:26:59.415685   17685 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 541.262375ms
	I0729 04:26:59.415703   17685 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0729 04:26:59.809780   17685 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:26:59.809861   17685 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:27:00.018387   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:27:00.018430   17685 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.144107042s
	I0729 04:27:00.018457   17685 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:27:01.147105   17685 start.go:128] duration metric: took 2.272282375s to createHost
	I0729 04:27:01.147174   17685 start.go:83] releasing machines lock for "test-preload-274000", held for 2.272457709s
	W0729 04:27:01.147217   17685 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:01.164173   17685 out.go:177] * Deleting "test-preload-274000" in qemu2 ...
	W0729 04:27:01.193867   17685 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:01.193910   17685 start.go:729] Will try again in 5 seconds ...
	I0729 04:27:01.491319   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0729 04:27:01.491367   17685 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.61695225s
	I0729 04:27:01.491412   17685 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0729 04:27:02.212419   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0729 04:27:02.212488   17685 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.338126167s
	I0729 04:27:02.212511   17685 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0729 04:27:03.931397   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0729 04:27:03.931452   17685 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.057250958s
	I0729 04:27:03.931479   17685 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0729 04:27:04.409485   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0729 04:27:04.409541   17685 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.535240041s
	I0729 04:27:04.409594   17685 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0729 04:27:04.481334   17685 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0729 04:27:04.481373   17685 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.607156042s
	I0729 04:27:04.481427   17685 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0729 04:27:06.194067   17685 start.go:360] acquireMachinesLock for test-preload-274000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:27:06.194517   17685 start.go:364] duration metric: took 365.291µs to acquireMachinesLock for "test-preload-274000"
	I0729 04:27:06.194649   17685 start.go:93] Provisioning new machine with config: &{Name:test-preload-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-274000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:27:06.194935   17685 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:27:06.203483   17685 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:27:06.253627   17685 start.go:159] libmachine.API.Create for "test-preload-274000" (driver="qemu2")
	I0729 04:27:06.253702   17685 client.go:168] LocalClient.Create starting
	I0729 04:27:06.253856   17685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:27:06.253932   17685 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:06.253949   17685 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:06.254041   17685 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:27:06.254090   17685 main.go:141] libmachine: Decoding PEM data...
	I0729 04:27:06.254101   17685 main.go:141] libmachine: Parsing certificate...
	I0729 04:27:06.254649   17685 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:27:06.461667   17685 main.go:141] libmachine: Creating SSH key...
	I0729 04:27:06.645764   17685 main.go:141] libmachine: Creating Disk image...
	I0729 04:27:06.645780   17685 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:27:06.645999   17685 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2
	I0729 04:27:06.655918   17685 main.go:141] libmachine: STDOUT: 
	I0729 04:27:06.655945   17685 main.go:141] libmachine: STDERR: 
	I0729 04:27:06.655996   17685 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2 +20000M
	I0729 04:27:06.664117   17685 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:27:06.664155   17685 main.go:141] libmachine: STDERR: 
	I0729 04:27:06.664172   17685 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2
	I0729 04:27:06.664176   17685 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:27:06.664185   17685 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:27:06.664226   17685 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:c4:f3:17:37:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/test-preload-274000/disk.qcow2
	I0729 04:27:06.665927   17685 main.go:141] libmachine: STDOUT: 
	I0729 04:27:06.665949   17685 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:27:06.665969   17685 client.go:171] duration metric: took 412.258625ms to LocalClient.Create
	I0729 04:27:08.666501   17685 start.go:128] duration metric: took 2.471583167s to createHost
	I0729 04:27:08.666564   17685 start.go:83] releasing machines lock for "test-preload-274000", held for 2.472080333s
	W0729 04:27:08.666761   17685 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:27:08.681179   17685 out.go:177] 
	W0729 04:27:08.685272   17685 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:27:08.685306   17685 out.go:239] * 
	* 
	W0729 04:27:08.687811   17685 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:27:08.699192   17685 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-274000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-29 04:27:08.719549 -0700 PDT m=+660.027530585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-274000 -n test-preload-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-274000 -n test-preload-274000: exit status 7 (65.572375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-274000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-274000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-274000
--- FAIL: TestPreload (10.11s)
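Note: every provisioning failure in this run has the same proximate cause: the QEMU network helper cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so host creation aborts before the VM ever boots. A minimal diagnostic sketch for the CI host follows; it assumes a from-source install under /opt/socket_vmnet (which the SocketVMnetClientPath in the config above suggests), and the gateway address is an illustrative default, not taken from this report:

	# check whether the daemon is running and its Unix socket exists
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if absent, relaunch the daemon (flags are assumptions; match the local install)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &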

TestScheduledStopUnix (10.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-152000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-152000 --memory=2048 --driver=qemu2 : exit status 80 (9.898816292s)

-- stdout --
	* [scheduled-stop-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-152000" primary control-plane node in "scheduled-stop-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-152000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-152000" primary control-plane node in "scheduled-stop-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 04:27:18.760446 -0700 PDT m=+670.068674543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-152000 -n scheduled-stop-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-152000 -n scheduled-stop-152000: exit status 7 (71.635583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-152000
--- FAIL: TestScheduledStopUnix (10.05s)
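For contrast, the disk-preparation half of the create path succeeds in these logs. The two qemu-img invocations reduce to the following standalone sketch, where MACHINE_DIR is a hypothetical stand-in for the per-profile machines directory:

	MACHINE_DIR=$HOME/.minikube/machines/example
	# convert the raw bootstrap image into a qcow2 disk ...
	qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	# ... then grow it by the requested 20000 MB
	qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M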

TestSkaffold (12.3s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe408186861 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe408186861 version: (1.064049333s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-126000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-126000 --memory=2600 --driver=qemu2 : exit status 80 (10.044852208s)

-- stdout --
	* [skaffold-126000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-126000" primary control-plane node in "skaffold-126000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-126000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-126000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-126000" primary control-plane node in "skaffold-126000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-126000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-126000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 04:27:31.066781 -0700 PDT m=+682.375310960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-126000 -n skaffold-126000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-126000 -n skaffold-126000: exit status 7 (64.234208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-126000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-126000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-126000
--- FAIL: TestSkaffold (12.30s)
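The launch command that fails is identical across tests: socket_vmnet_client opens the daemon's Unix socket and hands the connection to QEMU as file descriptor 3 (-netdev socket,id=net0,fd=3). Trimmed to its networking essentials, with an illustrative MAC address and a relative disk path, it looks like this sketch:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2600 -smp 2 \
	  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
	  -netdev socket,id=net0,fd=3 \
	  -daemonize disk.qcow2
	# "Connection refused" is emitted by socket_vmnet_client before QEMU starts:
	# the daemon behind /var/run/socket_vmnet is not listening.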

TestRunningBinaryUpgrade (587.89s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3474072201 start -p running-upgrade-317000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3474072201 start -p running-upgrade-317000 --memory=2200 --vm-driver=qemu2 : (50.212168959s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-317000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-317000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.370168959s)

-- stdout --
	* [running-upgrade-317000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-317000" primary control-plane node in "running-upgrade-317000" cluster
	* Updating the running qemu2 "running-upgrade-317000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 04:29:07.367705   18178 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:29:07.367895   18178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:29:07.367898   18178 out.go:304] Setting ErrFile to fd 2...
	I0729 04:29:07.367901   18178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:29:07.368042   18178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:29:07.369050   18178 out.go:298] Setting JSON to false
	I0729 04:29:07.385821   18178 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8916,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:29:07.385916   18178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:29:07.389989   18178 out.go:177] * [running-upgrade-317000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:29:07.399064   18178 notify.go:220] Checking for updates...
	I0729 04:29:07.402164   18178 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:29:07.406165   18178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:29:07.413155   18178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:29:07.416168   18178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:29:07.419169   18178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:29:07.422174   18178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:29:07.425358   18178 config.go:182] Loaded profile config "running-upgrade-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:29:07.430135   18178 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:29:07.433169   18178 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:29:07.437163   18178 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:29:07.445077   18178 start.go:297] selected driver: qemu2
	I0729 04:29:07.445083   18178 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:29:07.445130   18178 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:29:07.447406   18178 cni.go:84] Creating CNI manager for ""
	I0729 04:29:07.447424   18178 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:29:07.447445   18178 start.go:340] cluster config:
	{Name:running-upgrade-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:29:07.447492   18178 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:29:07.455054   18178 out.go:177] * Starting "running-upgrade-317000" primary control-plane node in "running-upgrade-317000" cluster
	I0729 04:29:07.459150   18178 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:29:07.459166   18178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:29:07.459177   18178 cache.go:56] Caching tarball of preloaded images
	I0729 04:29:07.459233   18178 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:29:07.459238   18178 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:29:07.459299   18178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/config.json ...
	I0729 04:29:07.459756   18178 start.go:360] acquireMachinesLock for running-upgrade-317000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:29:07.459788   18178 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "running-upgrade-317000"
	I0729 04:29:07.459802   18178 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:29:07.459806   18178 fix.go:54] fixHost starting: 
	I0729 04:29:07.460351   18178 fix.go:112] recreateIfNeeded on running-upgrade-317000: state=Running err=<nil>
	W0729 04:29:07.460358   18178 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:29:07.469112   18178 out.go:177] * Updating the running qemu2 "running-upgrade-317000" VM ...
	I0729 04:29:07.473160   18178 machine.go:94] provisionDockerMachine start ...
	I0729 04:29:07.473210   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:07.473344   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:07.473351   18178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:29:07.526579   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-317000
	
	I0729 04:29:07.526596   18178 buildroot.go:166] provisioning hostname "running-upgrade-317000"
	I0729 04:29:07.526657   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:07.526827   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:07.526833   18178 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-317000 && echo "running-upgrade-317000" | sudo tee /etc/hostname
	I0729 04:29:07.585824   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-317000
	
	I0729 04:29:07.585878   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:07.586001   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:07.586009   18178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-317000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-317000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-317000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:29:07.638589   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:29:07.638600   18178 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19341-15486/.minikube CaCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19341-15486/.minikube}
	I0729 04:29:07.638609   18178 buildroot.go:174] setting up certificates
	I0729 04:29:07.638613   18178 provision.go:84] configureAuth start
	I0729 04:29:07.638619   18178 provision.go:143] copyHostCerts
	I0729 04:29:07.638686   18178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem, removing ...
	I0729 04:29:07.638691   18178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem
	I0729 04:29:07.638820   18178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem (1078 bytes)
	I0729 04:29:07.639011   18178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem, removing ...
	I0729 04:29:07.639014   18178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem
	I0729 04:29:07.639063   18178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem (1123 bytes)
	I0729 04:29:07.639171   18178 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem, removing ...
	I0729 04:29:07.639174   18178 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem
	I0729 04:29:07.639216   18178 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem (1675 bytes)
	I0729 04:29:07.639312   18178 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-317000 san=[127.0.0.1 localhost minikube running-upgrade-317000]
	I0729 04:29:07.947133   18178 provision.go:177] copyRemoteCerts
	I0729 04:29:07.947187   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:29:07.947203   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:29:07.976627   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:29:07.985405   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 04:29:07.992131   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:29:08.001768   18178 provision.go:87] duration metric: took 363.156917ms to configureAuth
	I0729 04:29:08.001781   18178 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:29:08.001920   18178 config.go:182] Loaded profile config "running-upgrade-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:29:08.001955   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:08.002043   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:08.002048   18178 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:29:08.058954   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:29:08.058966   18178 buildroot.go:70] root file system type: tmpfs
	I0729 04:29:08.059024   18178 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:29:08.059075   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:08.059198   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:08.059231   18178 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:29:08.117759   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:29:08.117851   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:08.117976   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:08.117987   18178 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:29:08.170885   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:29:08.170898   18178 machine.go:97] duration metric: took 697.749333ms to provisionDockerMachine
	I0729 04:29:08.170904   18178 start.go:293] postStartSetup for "running-upgrade-317000" (driver="qemu2")
	I0729 04:29:08.170911   18178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:29:08.170967   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:29:08.170975   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:29:08.198878   18178 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:29:08.200172   18178 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:29:08.200180   18178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19341-15486/.minikube/addons for local assets ...
	I0729 04:29:08.200249   18178 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19341-15486/.minikube/files for local assets ...
	I0729 04:29:08.200337   18178 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0729 04:29:08.200431   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:29:08.203089   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0729 04:29:08.209605   18178 start.go:296] duration metric: took 38.696875ms for postStartSetup
	I0729 04:29:08.209623   18178 fix.go:56] duration metric: took 749.829458ms for fixHost
	I0729 04:29:08.209656   18178 main.go:141] libmachine: Using SSH client type: native
	I0729 04:29:08.209755   18178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100936a10] 0x100939270 <nil>  [] 0s} localhost 53107 <nil> <nil>}
	I0729 04:29:08.209761   18178 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 04:29:08.264488   18178 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252548.144462514
	
	I0729 04:29:08.264498   18178 fix.go:216] guest clock: 1722252548.144462514
	I0729 04:29:08.264502   18178 fix.go:229] Guest: 2024-07-29 04:29:08.144462514 -0700 PDT Remote: 2024-07-29 04:29:08.209626 -0700 PDT m=+0.861222543 (delta=-65.163486ms)
	I0729 04:29:08.264513   18178 fix.go:200] guest clock delta is within tolerance: -65.163486ms
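The fix.go entries above compare the guest's `date +%s.%N` output against the host clock and only resync when the delta exceeds a tolerance. A small Go sketch of that check; the one-second tolerance is an assumption for illustration (the log's actual delta of -65ms passes either way):

```go
// Sketch only: parse the guest's `date +%s.%N` output and compare it with
// the host clock. The one-second tolerance is an assumed value.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1722252548.144462514" // from the log above
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	// float64 loses sub-microsecond precision; fine for a tolerance check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Until(guest)

	const tolerance = time.Second // assumed, not minikube's real threshold
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
```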
	I0729 04:29:08.264516   18178 start.go:83] releasing machines lock for "running-upgrade-317000", held for 804.743917ms
	I0729 04:29:08.264576   18178 ssh_runner.go:195] Run: cat /version.json
	I0729 04:29:08.264588   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:29:08.264576   18178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:29:08.264614   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	W0729 04:29:08.265134   18178 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53107: connect: connection refused
	I0729 04:29:08.265160   18178 retry.go:31] will retry after 165.421864ms: dial tcp [::1]:53107: connect: connection refused
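The warning above shows a connection-refused SSH dial being retried after a short backoff. A hedged Go sketch of such a retry loop; the attempt count and the 150ms base delay are illustrative, not the values retry.go actually computes:

```go
// Sketch only: retry a refused TCP dial with a growing backoff.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			return c, nil
		}
		backoff := time.Duration(i+1) * 150 * time.Millisecond // illustrative schedule
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	if c, err := dialWithRetry("localhost:53107", 5); err == nil {
		c.Close()
	}
}
```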
	W0729 04:29:08.506846   18178 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:29:08.506934   18178 ssh_runner.go:195] Run: systemctl --version
	I0729 04:29:08.509012   18178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:29:08.511407   18178 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:29:08.511441   18178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:29:08.514475   18178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:29:08.518766   18178 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 04:29:08.518774   18178 start.go:495] detecting cgroup driver to use...
	I0729 04:29:08.518888   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:29:08.524203   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:29:08.527251   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:29:08.530071   18178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:29:08.530093   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:29:08.533717   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:29:08.537136   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:29:08.540617   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:29:08.543894   18178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:29:08.546916   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:29:08.549782   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:29:08.553131   18178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:29:08.556435   18178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:29:08.559097   18178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:29:08.561635   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:29:08.653381   18178 ssh_runner.go:195] Run: sudo systemctl restart containerd
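The sed commands above rewrite /etc/containerd/config.toml in place, most importantly flipping SystemdCgroup to false so containerd uses the cgroupfs driver. A Go sketch of that one rewrite; the regex mirrors the logged sed expression, and writing the file directly (rather than over SSH with sudo) is a simplification:

```go
// Sketch only: Go equivalent of the logged sed rewrite that pins containerd
// to the cgroupfs driver.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
```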
	I0729 04:29:08.665131   18178 start.go:495] detecting cgroup driver to use...
	I0729 04:29:08.665203   18178 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:29:08.670041   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:29:08.674906   18178 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:29:08.684653   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:29:08.689593   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:29:08.694133   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:29:08.699428   18178 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:29:08.700715   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:29:08.703582   18178 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:29:08.708475   18178 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:29:08.801270   18178 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:29:08.896492   18178 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:29:08.896553   18178 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:29:08.902024   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:29:08.996032   18178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:29:12.396449   18178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.400482541s)
	I0729 04:29:12.396517   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:29:12.401432   18178 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0729 04:29:12.408675   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:29:12.413166   18178 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:29:12.493460   18178 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:29:12.566253   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:29:12.656407   18178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:29:12.662355   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:29:12.667061   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:29:12.746571   18178 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:29:12.784574   18178 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:29:12.784642   18178 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:29:12.787663   18178 start.go:563] Will wait 60s for crictl version
	I0729 04:29:12.787710   18178 ssh_runner.go:195] Run: which crictl
	I0729 04:29:12.789041   18178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:29:12.800908   18178 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:29:12.800969   18178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:29:12.813287   18178 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:29:12.830741   18178 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:29:12.830830   18178 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:29:12.832112   18178 kubeadm.go:883] updating cluster {Name:running-upgrade-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:29:12.832154   18178 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:29:12.832192   18178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:29:12.842169   18178 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:29:12.842178   18178 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:29:12.842227   18178 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:29:12.845527   18178 ssh_runner.go:195] Run: which lz4
	I0729 04:29:12.846748   18178 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:29:12.847888   18178 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:29:12.847896   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:29:13.786152   18178 docker.go:649] duration metric: took 939.45675ms to copy over tarball
	I0729 04:29:13.786208   18178 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:29:14.925170   18178 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.138976708s)
	I0729 04:29:14.925184   18178 ssh_runner.go:146] rm: /preloaded.tar.lz4
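The preload sequence above follows a check-then-transfer pattern: stat the remote tarball, copy it over only when absent, unpack it with lz4, then delete it. A sketch of that flow using the OpenSSH/scp command-line clients as hypothetical stand-ins for minikube's in-process ssh_runner (which authenticates with the machine's id_rsa key):

```go
// Sketch only: check-then-transfer for the preload tarball. Host, port, and
// key handling are simplified; runRemote is an illustrative helper.
package main

import (
	"fmt"
	"os/exec"
)

// runRemote runs a command on the guest and reports whether it succeeded.
func runRemote(args ...string) bool {
	ssh := append([]string{"-p", "53107", "docker@localhost"}, args...)
	return exec.Command("ssh", ssh...).Run() == nil
}

func main() {
	const remote = "/preloaded.tar.lz4"
	if runRemote("stat", "-c", "%s %y", remote) {
		fmt.Println("preload already present; skipping transfer")
		return
	}
	local := "preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"
	if err := exec.Command("scp", "-P", "53107", local, "docker@localhost:"+remote).Run(); err != nil {
		panic(err)
	}
	// Matches the logged extraction, then the cleanup rm.
	if !runRemote("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remote) {
		panic("extract failed")
	}
	runRemote("sudo", "rm", "-f", remote)
}
```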
	I0729 04:29:14.941322   18178 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:29:14.944863   18178 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:29:14.949931   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:29:15.031703   18178 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:29:16.232697   18178 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.201007s)
	I0729 04:29:16.232788   18178 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:29:16.248674   18178 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:29:16.248687   18178 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:29:16.248693   18178 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:29:16.252866   18178 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:29:16.254456   18178 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:29:16.256339   18178 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:29:16.256379   18178 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:29:16.258542   18178 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:29:16.258782   18178 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:29:16.259935   18178 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:29:16.260072   18178 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:29:16.261528   18178 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:29:16.261591   18178 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:29:16.262805   18178 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:29:16.262888   18178 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:29:16.263950   18178 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:29:16.264577   18178 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:29:16.265408   18178 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:29:16.266265   18178 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:29:16.654118   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:29:16.672012   18178 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:29:16.672053   18178 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:29:16.672107   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:29:16.682905   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
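Each "needs transfer" decision above comes from comparing the image ID reported by `docker image inspect --format {{.Id}}` with the hash recorded for the cached image; a mismatch (here, amd64 layers where arm64 is wanted) triggers a `docker rmi` and a reload from cache. A small Go sketch of that comparison, with the helper wiring being illustrative:

```go
// Sketch only: decide whether a cached image must be transferred by comparing
// the runtime's image ID with the hash recorded for the cache entry.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image absent entirely -> transfer
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

func main() {
	// Image name and expected hash taken from the log lines above.
	fmt.Println(needsTransfer("registry.k8s.io/kube-proxy:v1.24.1",
		"fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa"))
}
```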
	I0729 04:29:16.687119   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:29:16.688518   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:29:16.689594   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:29:16.702626   18178 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:29:16.702648   18178 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:29:16.702702   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:29:16.704672   18178 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:29:16.704686   18178 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:29:16.704716   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:29:16.711169   18178 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:29:16.711192   18178 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:29:16.711252   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:29:16.715363   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:29:16.717474   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:29:16.719741   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:29:16.719747   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:29:16.736572   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:29:16.736596   18178 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:29:16.736615   18178 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:29:16.736655   18178 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:29:16.736658   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:29:16.736665   18178 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:29:16.736686   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0729 04:29:16.743911   18178 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:29:16.744043   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:29:16.749670   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:29:16.749672   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0729 04:29:16.749793   18178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:29:16.757839   18178 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:29:16.757866   18178 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:29:16.757870   18178 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:29:16.757892   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:29:16.757913   18178 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:29:16.769195   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:29:16.769306   18178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:29:16.770844   18178 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:29:16.770854   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:29:16.790119   18178 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:29:16.790140   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:29:16.841005   18178 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 04:29:16.841045   18178 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:29:16.841051   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:29:16.872976   18178 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:29:16.873078   18178 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:29:16.879878   18178 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:29:16.884731   18178 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:29:16.884753   18178 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:29:16.884810   18178 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:29:18.383299   18178 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.498504083s)
	I0729 04:29:18.383323   18178 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:29:18.383561   18178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:29:18.388702   18178 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:29:18.388756   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:29:18.443808   18178 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:29:18.443822   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:29:18.677100   18178 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 04:29:18.677143   18178 cache_images.go:92] duration metric: took 2.428502583s to LoadCachedImages
	W0729 04:29:18.677187   18178 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0729 04:29:18.677194   18178 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:29:18.677255   18178 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-317000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 04:29:18.677311   18178 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:29:18.690203   18178 cni.go:84] Creating CNI manager for ""
	I0729 04:29:18.690216   18178 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:29:18.690221   18178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:29:18.690233   18178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-317000 NodeName:running-upgrade-317000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:29:18.690303   18178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-317000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 04:29:18.690351   18178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:29:18.693374   18178 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:29:18.693402   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:29:18.696497   18178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:29:18.701877   18178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:29:18.706898   18178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 04:29:18.712078   18178 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:29:18.713534   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:29:18.784866   18178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:29:18.789979   18178 certs.go:68] Setting up /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000 for IP: 10.0.2.15
	I0729 04:29:18.789988   18178 certs.go:194] generating shared ca certs ...
	I0729 04:29:18.789999   18178 certs.go:226] acquiring lock for ca certs: {Name:mkdf1894d8f9d5e3cc3aa4d0030f6ecce44e63f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:29:18.790262   18178 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.key
	I0729 04:29:18.790297   18178 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.key
	I0729 04:29:18.790303   18178 certs.go:256] generating profile certs ...
	I0729 04:29:18.790363   18178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.key
	I0729 04:29:18.790377   18178 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.key.27b0bdd1
	I0729 04:29:18.790386   18178 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.crt.27b0bdd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:29:18.903945   18178 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.crt.27b0bdd1 ...
	I0729 04:29:18.903956   18178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.crt.27b0bdd1: {Name:mkf1829398b65ca29259398ea83169e1220cd473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:29:18.904180   18178 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.key.27b0bdd1 ...
	I0729 04:29:18.904190   18178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.key.27b0bdd1: {Name:mk530e78346537d2c3b97cceb3e301a505647a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:29:18.904324   18178 certs.go:381] copying /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.crt.27b0bdd1 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.crt
	I0729 04:29:18.904439   18178 certs.go:385] copying /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.key.27b0bdd1 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.key
	I0729 04:29:18.904563   18178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/proxy-client.key
	I0729 04:29:18.904689   18178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973.pem (1338 bytes)
	W0729 04:29:18.904713   18178 certs.go:480] ignoring /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0729 04:29:18.904718   18178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 04:29:18.904745   18178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:29:18.904763   18178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:29:18.904781   18178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem (1675 bytes)
	I0729 04:29:18.904822   18178 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0729 04:29:18.905196   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:29:18.913228   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 04:29:18.921064   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:29:18.928556   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 04:29:18.935691   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:29:18.942857   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 04:29:18.949800   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:29:18.956961   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 04:29:18.964090   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:29:18.971545   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0729 04:29:18.979158   18178 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0729 04:29:18.986514   18178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:29:18.991787   18178 ssh_runner.go:195] Run: openssl version
	I0729 04:29:18.993911   18178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0729 04:29:18.996970   18178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0729 04:29:18.998651   18178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:17 /usr/share/ca-certificates/15973.pem
	I0729 04:29:18.998673   18178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0729 04:29:19.000612   18178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0729 04:29:19.003203   18178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0729 04:29:19.006767   18178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0729 04:29:19.008397   18178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:17 /usr/share/ca-certificates/159732.pem
	I0729 04:29:19.008414   18178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0729 04:29:19.010137   18178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 04:29:19.013158   18178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:29:19.016279   18178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:29:19.017929   18178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:29:19.017951   18178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:29:19.020073   18178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
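The test/ln commands above maintain OpenSSL's hashed-lookup directory: each CA PEM in /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink, where the hash is what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA.pem). A Go sketch of creating one such link; removing and recreating the link is a simplification of the logged `test -L || ln -fs` guard:

```go
// Sketch only: create the <subject-hash>.0 symlink OpenSSL uses to locate a
// CA certificate by subject-name hash.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // drop any stale link before relinking
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
```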
	I0729 04:29:19.023322   18178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:29:19.024924   18178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:29:19.026940   18178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:29:19.028825   18178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:29:19.030723   18178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:29:19.032724   18178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:29:19.034544   18178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
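The six openssl `-checkend 86400` runs above assert that each control-plane certificate remains valid for at least another day before the restart path reuses it. The same check in pure Go, shown for one of the logged paths:

```go
// Sketch only: pure-Go equivalent of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; restart path would regenerate it")
	} else {
		fmt.Println("certificate valid for at least another day; safe to reuse")
	}
}
```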
	I0729 04:29:19.036464   18178 kubeadm.go:392] StartCluster: {Name:running-upgrade-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53139 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:29:19.036533   18178 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:29:19.046683   18178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:29:19.049826   18178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:29:19.049832   18178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:29:19.049854   18178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:29:19.052700   18178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:29:19.052740   18178 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-317000" does not appear in /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:29:19.052757   18178 kubeconfig.go:62] /Users/jenkins/minikube-integration/19341-15486/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-317000" cluster setting kubeconfig missing "running-upgrade-317000" context setting]
	I0729 04:29:19.052933   18178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:29:19.054114   18178 kapi.go:59] client config for running-upgrade-317000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ccc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:29:19.055248   18178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:29:19.058449   18178 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-317000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
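The unified diff above is what drives the drift check: `diff -u` exits 0 when the on-disk kubeadm.yaml matches the freshly rendered kubeadm.yaml.new and 1 when they differ, so a non-zero exit signals that the cluster must be reconfigured. A minimal Go sketch of that check (not minikube's actual code; paths taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubeadmConfigDrifted reports whether oldPath and newPath differ,
// returning the unified diff when they do.
func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files are identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit >1: diff itself failed (e.g. missing file)
}

func main() {
	drifted, diff, err := kubeadmConfigDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("config drift detected, will reconfigure:\n" + diff)
	}
}
```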
	I0729 04:29:19.058456   18178 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:29:19.058507   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:29:19.069686   18178 docker.go:483] Stopping containers: [fca6d810590a 41ece1a9412e 8d522a953404 b25546feb08e 3d1157a1985b eeb9d383fda6 627551587c9d da7fecfce787 5893dafa54f8 1189d5bb0bf0 e78f44f711dc ac2e400eb8df]
	I0729 04:29:19.069757   18178 ssh_runner.go:195] Run: docker stop fca6d810590a 41ece1a9412e 8d522a953404 b25546feb08e 3d1157a1985b eeb9d383fda6 627551587c9d da7fecfce787 5893dafa54f8 1189d5bb0bf0 e78f44f711dc ac2e400eb8df
	I0729 04:29:19.080928   18178 ssh_runner.go:195] Run: sudo systemctl stop kubelet
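Before reconfiguring, the runner stops every kube-system container it can find by name filter, then stops the kubelet so it cannot restart them mid-reconfigure. A hypothetical sketch of that stop sequence, reusing the exact `docker ps` filter from the log and assuming root:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystem lists kube-system pod containers, stops them in a single
// `docker stop`, then stops the kubelet service.
func stopKubeSystem() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	if ids := strings.Fields(string(out)); len(ids) > 0 {
		fmt.Println("Stopping containers:", ids)
		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
			return err
		}
	}
	return exec.Command("systemctl", "stop", "kubelet").Run()
}

func main() {
	if err := stopKubeSystem(); err != nil {
		fmt.Println("stop sequence failed:", err)
	}
}
```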
	I0729 04:29:19.178935   18178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:29:19.182856   18178 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 29 11:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 29 11:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 29 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 29 11:28 /etc/kubernetes/scheduler.conf
	
	I0729 04:29:19.182888   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/admin.conf
	I0729 04:29:19.186049   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:29:19.186086   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:29:19.189354   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/kubelet.conf
	I0729 04:29:19.192693   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:29:19.192723   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:29:19.195904   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/controller-manager.conf
	I0729 04:29:19.198939   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:29:19.198958   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:29:19.201652   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/scheduler.conf
	I0729 04:29:19.204341   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:29:19.204365   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:29:19.207409   18178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
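Each /etc/kubernetes/*.conf above is grepped for the expected control-plane endpoint; grep's exit status 1 (no match) marks the file as stale, and the file is removed so the `kubeadm init phase kubeconfig` run that follows can regenerate it. A minimal sketch of that loop, assuming root (the log runs each step via sudo):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:53139"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits with status 1 when no line matches the pattern.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // error ignored; file may already be gone
		}
	}
}
```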
	I0729 04:29:19.210537   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:29:19.236937   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:29:19.544203   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:29:19.926475   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:29:19.951155   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
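Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the repaired config. A simplified sketch of that sequence (the real invocation also prefixes PATH with the versioned binaries directory, as shown above):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Phases in the same order the log runs them.
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		args := append(append([]string{"init", "phase"}, phase...),
			"--config", config)
		fmt.Println("kubeadm", args)
		if err := exec.Command("kubeadm", args...).Run(); err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
```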
	I0729 04:29:19.982039   18178 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:29:19.982119   18178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:29:20.484310   18178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:29:20.984157   18178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:29:20.988886   18178 api_server.go:72] duration metric: took 1.006873208s to wait for apiserver process to appear ...
	I0729 04:29:20.988894   18178 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:29:20.988903   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:25.990948   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:25.991007   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:30.991624   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:30.991676   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:35.992205   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:35.992264   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:40.992911   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:40.992978   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:45.993969   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:45.994049   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:50.995476   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:50.995556   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:29:55.997485   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:29:55.997569   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:01.000055   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:01.000093   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:06.002342   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:06.002431   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:11.004949   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:11.005038   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:16.007595   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:16.007673   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:21.010165   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
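From here the run settles into its failure loop: each healthz GET uses a ~5s client timeout, and after repeated deadline-exceeded errors the runner pauses to gather component logs before retrying. A minimal sketch of such a poll loop, assuming InsecureSkipVerify for brevity (the real client trusts the cluster CA configured in the rest.Config above):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the overall deadline expires. Each attempt has its own 5s
// timeout, matching the ~5s gaps between "Checking ... healthz" lines.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; the real client verifies
			// the server against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // brief backoff before retrying
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```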
	I0729 04:30:21.010588   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:30:21.049902   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:30:21.050044   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:30:21.071300   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:30:21.071407   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:30:21.088718   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:30:21.088791   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:30:21.103023   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:30:21.103104   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:30:21.113531   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:30:21.113595   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:30:21.124200   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:30:21.124269   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:30:21.141115   18178 logs.go:276] 0 containers: []
	W0729 04:30:21.141126   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:30:21.141187   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:30:21.151926   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:30:21.151945   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:30:21.151950   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:30:21.166391   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:30:21.166405   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:30:21.183409   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:30:21.183423   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:30:21.198176   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:30:21.198189   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:30:21.265449   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:30:21.265462   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:30:21.279134   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:30:21.279144   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:30:21.297975   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:30:21.297986   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:30:21.309613   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:30:21.309625   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:30:21.333760   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:30:21.333767   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:30:21.337630   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:30:21.337638   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:30:21.351748   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:30:21.351761   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:30:21.378521   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:30:21.378532   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:30:21.393580   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:30:21.393591   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:30:21.415387   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:30:21.415398   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:30:21.427325   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:30:21.427336   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:30:21.462555   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:30:21.462565   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:30:21.480068   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:30:21.480080   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
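The log-gathering pass that repeats for the rest of this test follows one pattern: discover each component's container IDs with a `docker ps` name filter, then capture the last 400 log lines from each container found. A condensed sketch (error handling omitted for brevity; the component list mirrors the filters above):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs, running or exited, whose name
// matches the k8s_<component> prefix kubelet gives pod containers.
func containerIDs(component string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format", "{{.ID}}").Output()
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	} {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```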
	I0729 04:30:23.991768   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:28.993183   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:28.993551   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:30:29.022865   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:30:29.023014   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:30:29.041111   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:30:29.041213   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:30:29.054584   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:30:29.054660   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:30:29.066517   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:30:29.066588   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:30:29.077237   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:30:29.077306   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:30:29.092837   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:30:29.092916   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:30:29.102963   18178 logs.go:276] 0 containers: []
	W0729 04:30:29.102977   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:30:29.103044   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:30:29.113651   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:30:29.113669   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:30:29.113675   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:30:29.139713   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:30:29.139723   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:30:29.156442   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:30:29.156452   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:30:29.172214   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:30:29.172226   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:30:29.186993   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:30:29.187005   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:30:29.198607   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:30:29.198619   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:30:29.223284   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:30:29.223295   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:30:29.236793   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:30:29.236804   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:30:29.253816   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:30:29.253829   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:30:29.264928   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:30:29.264941   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:30:29.300043   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:30:29.300051   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:30:29.316530   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:30:29.316544   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:30:29.329558   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:30:29.329571   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:30:29.340879   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:30:29.340888   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:30:29.345200   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:30:29.345206   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:30:29.384341   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:30:29.384353   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:30:29.395328   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:30:29.395340   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:30:31.914536   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:36.917216   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:36.917591   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:30:36.948810   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:30:36.948933   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:30:36.968049   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:30:36.968134   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:30:36.981908   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:30:36.981970   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:30:36.994873   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:30:36.994942   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:30:37.006149   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:30:37.006223   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:30:37.016898   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:30:37.016963   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:30:37.029587   18178 logs.go:276] 0 containers: []
	W0729 04:30:37.029605   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:30:37.029665   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:30:37.040010   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:30:37.040029   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:30:37.040034   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:30:37.074560   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:30:37.074571   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:30:37.078634   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:30:37.078642   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:30:37.095546   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:30:37.095556   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:30:37.106188   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:30:37.106199   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:30:37.130468   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:30:37.130477   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:30:37.141709   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:30:37.141721   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:30:37.170931   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:30:37.170941   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:30:37.184381   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:30:37.184395   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:30:37.197737   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:30:37.197748   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:30:37.211276   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:30:37.211287   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:30:37.222160   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:30:37.222170   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:30:37.235995   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:30:37.236009   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:30:37.247633   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:30:37.247651   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:30:37.261777   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:30:37.261793   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:30:37.296447   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:30:37.296461   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:30:37.307797   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:30:37.307808   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:30:39.825263   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:44.826450   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:44.826719   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:30:44.854798   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:30:44.854902   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:30:44.868938   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:30:44.869016   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:30:44.885886   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:30:44.885959   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:30:44.897886   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:30:44.897960   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:30:44.908885   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:30:44.908947   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:30:44.919767   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:30:44.919840   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:30:44.930902   18178 logs.go:276] 0 containers: []
	W0729 04:30:44.930919   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:30:44.931001   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:30:44.944003   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:30:44.944023   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:30:44.944029   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:30:44.957380   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:30:44.957393   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:30:44.974372   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:30:44.974386   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:30:44.999068   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:30:44.999078   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:30:45.035485   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:30:45.035498   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:30:45.062713   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:30:45.062724   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:30:45.099374   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:30:45.099390   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:30:45.115598   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:30:45.115614   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:30:45.128757   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:30:45.128767   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:30:45.144729   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:30:45.144739   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:30:45.156376   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:30:45.156387   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:30:45.170487   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:30:45.170497   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:30:45.182330   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:30:45.182341   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:30:45.201954   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:30:45.201966   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:30:45.206177   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:30:45.206187   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:30:45.219720   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:30:45.219730   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:30:45.231408   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:30:45.231420   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:30:47.747183   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:30:52.749713   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:30:52.750057   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:30:52.784350   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:30:52.784459   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:30:52.801645   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:30:52.801716   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:30:52.814743   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:30:52.814808   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:30:52.827697   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:30:52.827754   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:30:52.838409   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:30:52.838462   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:30:52.849143   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:30:52.849199   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:30:52.859355   18178 logs.go:276] 0 containers: []
	W0729 04:30:52.859370   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:30:52.859414   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:30:52.869804   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:30:52.869821   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:30:52.869826   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:30:52.903961   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:30:52.903971   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:30:52.915404   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:30:52.915420   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:30:52.942169   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:30:52.942184   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:30:52.955090   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:30:52.955099   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:30:52.966133   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:30:52.966144   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:30:52.981908   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:30:52.981922   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:30:52.995488   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:30:52.995498   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:30:53.021177   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:30:53.021188   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:30:53.034894   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:30:53.034908   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:30:53.048784   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:30:53.048799   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:30:53.060129   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:30:53.060141   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:30:53.077627   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:30:53.077641   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:30:53.089311   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:30:53.089326   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:30:53.100844   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:30:53.100857   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:30:53.136060   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:30:53.136068   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:30:53.140259   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:30:53.140266   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:30:55.656302   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:00.657910   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:00.658322   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:00.694727   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:00.694860   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:00.714649   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:00.714759   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:00.729297   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:00.729374   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:00.741326   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:00.741388   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:00.751844   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:00.751911   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:00.762741   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:00.762809   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:00.773459   18178 logs.go:276] 0 containers: []
	W0729 04:31:00.773480   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:00.773533   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:00.787635   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:00.787651   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:00.787656   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:00.808527   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:00.808540   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:00.838664   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:00.838672   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:00.852540   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:00.852551   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:00.864219   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:00.864229   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:00.875750   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:00.875763   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:00.887561   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:00.887570   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:00.931261   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:00.931276   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:00.945778   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:00.945787   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:00.958020   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:00.958031   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:31:00.969319   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:00.969330   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:01.004717   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:01.004730   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:01.019223   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:01.019235   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:01.032664   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:01.032675   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:01.047732   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:01.047744   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:01.062467   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:01.062480   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:01.099194   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:01.099202   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:03.605984   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:08.607183   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:08.607627   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:08.650427   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:08.650566   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:08.672421   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:08.672531   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:08.687994   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:08.688071   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:08.700853   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:08.700929   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:08.717180   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:08.717257   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:08.728174   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:08.728246   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:08.738350   18178 logs.go:276] 0 containers: []
	W0729 04:31:08.738360   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:08.738414   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:08.749371   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:08.749389   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:08.749394   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:08.784639   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:08.784648   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:08.818712   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:08.818725   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:08.832742   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:08.832754   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:08.848521   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:08.848535   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:08.860462   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:08.860473   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:08.877888   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:08.877900   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:08.891569   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:08.891580   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:31:08.902932   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:08.902944   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:08.928373   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:08.928381   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:08.940023   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:08.940034   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:08.944753   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:08.944762   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:08.959011   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:08.959022   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:08.970174   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:08.970187   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:08.981805   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:08.981816   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:09.006448   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:09.006460   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:09.019345   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:09.019359   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:11.538744   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:16.540674   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:16.541056   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:16.569061   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:16.569184   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:16.587823   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:16.587909   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:16.601847   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:16.601913   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:16.613042   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:16.613108   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:16.623464   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:16.623531   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:16.635462   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:16.635521   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:16.645842   18178 logs.go:276] 0 containers: []
	W0729 04:31:16.645856   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:16.645912   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:16.656100   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:16.656123   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:16.656128   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:16.681347   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:16.681355   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:16.692908   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:16.692921   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:16.704544   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:16.704558   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:16.721517   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:16.721529   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:16.745370   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:16.745384   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:16.749521   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:16.749532   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:16.763524   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:16.763536   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:16.777511   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:16.777522   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:16.788476   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:16.788486   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:31:16.799268   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:16.799282   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:16.837942   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:16.837957   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:16.864448   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:16.864462   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:16.878376   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:16.878385   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:16.889432   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:16.889443   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:16.927130   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:16.927140   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:16.940540   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:16.940549   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:19.458204   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:24.460457   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:24.460887   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:24.500470   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:24.500596   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:24.522057   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:24.522177   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:24.537145   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:24.537226   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:24.555076   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:24.555147   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:24.569138   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:24.569196   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:24.579468   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:24.579528   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:24.599211   18178 logs.go:276] 0 containers: []
	W0729 04:31:24.599223   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:24.599282   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:24.609765   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:24.609782   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:24.609788   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:24.644700   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:24.644713   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:24.669541   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:24.669553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:24.681024   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:24.681038   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:24.692904   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:24.692914   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:24.716778   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:24.716789   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:24.721357   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:24.721367   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:24.735149   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:24.735162   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:24.765893   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:24.765905   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:24.784525   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:24.784537   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:24.805974   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:24.805984   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:24.817488   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:24.817498   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:24.831137   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:24.831153   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:24.869099   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:24.869107   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:24.883316   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:24.883327   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:24.899399   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:24.899414   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:24.914062   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:24.914074   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
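
The cycle above is the pattern the whole retry loop follows: poll https://10.0.2.15:8443/healthz, give up after roughly five seconds with a client timeout, then enumerate the control-plane containers and tail their logs before the next attempt. As a rough illustration of the poll step only, here is a minimal Go sketch; the endpoint, the 5 s timeout, the ~8 s cadence, and all names are read off the log above, not taken from minikube's source.

// Illustrative sketch only: approximates the poll-then-diagnose pattern
// visible in the log above. Names, timeouts, and the retry cadence are
// assumptions inferred from the timestamps, not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap before each "stopped:" line
		Transport: &http.Transport{
			// the guest apiserver serves a self-signed cert, so verification is skipped here
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	ticker := time.NewTicker(8 * time.Second) // poll starts are ~8 s apart in the log
	defer ticker.Stop()
	for range ticker.C {
		if err := checkHealthz(url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // the real tool gathers container logs here before retrying
		}
		fmt.Println("apiserver healthy")
		return
	}
}
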
	I0729 04:31:27.427379   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:32.430049   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:32.430445   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:32.467408   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:32.467552   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:32.490370   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:32.490485   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:32.506311   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:32.506383   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:32.519016   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:32.519079   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:32.529547   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:32.529610   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:32.539903   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:32.539965   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:32.551930   18178 logs.go:276] 0 containers: []
	W0729 04:31:32.551955   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:32.552011   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:32.562192   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:32.562210   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:32.562216   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:32.585821   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:32.585829   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:32.621835   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:32.621843   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:32.635401   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:32.635414   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:32.652916   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:32.652929   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:31:32.664554   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:32.664565   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:32.678049   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:32.678062   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:32.691413   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:32.691426   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:32.702436   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:32.702450   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:32.717834   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:32.717844   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:32.731965   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:32.731978   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:32.743014   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:32.743027   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:32.778496   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:32.778507   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:32.806174   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:32.806185   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:32.817965   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:32.817978   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:32.822269   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:32.822274   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:32.835717   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:32.835729   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
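
Within each cycle, the per-component container IDs come from "docker ps -a --filter=name=k8s_<component> --format={{.ID}}", and each ID is then tailed with "docker logs --tail 400 <id>". Below is a hypothetical stand-alone sketch of that enumeration step, assuming a local docker CLI rather than minikube's ssh_runner; the component list is copied from the log.

// Hypothetical helper mirroring the enumeration step in the log:
// list container IDs for one kubeadm component by name filter, then
// tail each container's log. A sketch for illustration, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // docker prints one ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. the logs.go:276 lines
		for _, id := range ids {
			// equivalent to the log's "docker logs --tail 400 <id>"
			logOut, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = logOut // a real collector would scan or persist this output
		}
	}
}
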
	I0729 04:31:35.349813   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:40.352089   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:40.352288   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:40.374578   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:40.374672   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:40.390213   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:40.390282   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:40.402863   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:40.402928   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:40.413207   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:40.413276   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:40.423829   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:40.423888   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:40.438399   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:40.438466   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:40.448077   18178 logs.go:276] 0 containers: []
	W0729 04:31:40.448093   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:40.448148   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:40.458153   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:40.458176   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:40.458182   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:40.482909   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:40.482918   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:40.497495   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:40.497507   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:40.508779   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:40.508791   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:40.526100   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:40.526111   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:31:40.536860   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:40.536873   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:40.561098   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:40.561110   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:40.597726   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:40.597735   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:40.601782   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:40.601788   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:40.621647   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:40.621658   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:40.635536   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:40.635549   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:40.647349   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:40.647363   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:40.681954   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:40.681966   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:40.697881   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:40.697894   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:40.712324   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:40.712338   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:40.724162   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:40.724174   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:40.737524   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:40.737535   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:43.250496   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:48.253061   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:48.253229   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:48.265618   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:48.265690   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:48.276996   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:48.277085   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:48.287718   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:48.287783   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:48.298276   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:48.298347   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:48.313474   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:48.313543   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:48.324197   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:48.324264   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:48.345442   18178 logs.go:276] 0 containers: []
	W0729 04:31:48.345454   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:48.345511   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:48.357006   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:48.357025   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:48.357033   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:48.393335   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:48.393346   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:48.397748   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:48.397757   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:48.411747   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:48.411759   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:48.426926   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:48.426938   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:48.439143   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:48.439155   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:48.456945   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:48.456967   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:48.482327   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:48.482339   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:48.498838   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:48.498851   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:48.525316   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:48.525324   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:48.537143   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:48.537154   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:48.574977   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:48.574987   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:48.589168   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:48.589179   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:48.603274   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:48.603284   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:48.622195   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:48.622205   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:48.639426   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:48.639436   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:48.655172   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:48.655186   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
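
Besides the per-container logs, every cycle also collects host-level diagnostics: journalctl for the kubelet and docker/cri-docker units, a filtered dmesg tail, "kubectl describe nodes" against the in-VM kubeconfig, and a container-status command that falls back from crictl to docker. The command strings in the sketch below are copied from the log; wrapping them in a local bash -c, instead of minikube's SSH runner, is an assumption for illustration.

// Sketch of the host-level collection steps seen in each cycle.
// Commands are taken verbatim from the log; running them locally as
// shown (rather than over SSH inside the guest) is an assumption.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	fmt.Println("Gathering logs for", name, "...")
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	_ = out // a real collector would write this into the test artifacts
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	gather("describe nodes", `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	// "which crictl || echo crictl" yields crictl's path when it is installed;
	// otherwise the bare name fails and "|| sudo docker ps -a" runs instead.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
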
	I0729 04:31:51.168680   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:31:56.169881   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:31:56.170323   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:31:56.229633   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:31:56.229739   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:31:56.247239   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:31:56.247320   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:31:56.264621   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:31:56.264686   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:31:56.275176   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:31:56.275237   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:31:56.291141   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:31:56.291213   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:31:56.302315   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:31:56.302377   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:31:56.313059   18178 logs.go:276] 0 containers: []
	W0729 04:31:56.313073   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:31:56.313132   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:31:56.323759   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:31:56.323782   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:31:56.323787   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:31:56.335114   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:31:56.335126   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:31:56.346557   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:31:56.346571   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:31:56.372315   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:31:56.372325   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:31:56.383896   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:31:56.383906   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:31:56.421588   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:31:56.421599   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:31:56.446945   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:31:56.446955   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:31:56.460072   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:31:56.460083   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:31:56.473023   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:31:56.473035   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:31:56.485215   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:31:56.485228   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:31:56.489418   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:31:56.489424   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:31:56.503712   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:31:56.503722   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:31:56.525427   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:31:56.525437   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:31:56.561519   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:31:56.561529   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:31:56.578862   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:31:56.578872   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:31:56.592806   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:31:56.592817   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:31:56.606801   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:31:56.606812   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:31:59.123217   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:04.125543   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:04.125927   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:04.160681   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:04.160812   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:04.180663   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:04.180762   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:04.195540   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:04.195618   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:04.208398   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:04.208471   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:04.219583   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:04.219658   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:04.230744   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:04.230809   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:04.241527   18178 logs.go:276] 0 containers: []
	W0729 04:32:04.241538   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:04.241599   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:04.252100   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:04.252119   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:04.252125   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:04.287702   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:04.287712   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:04.306057   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:04.306067   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:04.317205   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:04.317218   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:04.334741   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:04.334752   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:04.346484   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:04.346498   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:04.370024   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:04.370032   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:04.381856   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:04.381867   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:04.386851   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:04.386859   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:04.422850   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:04.422861   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:04.438970   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:04.438983   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:04.454391   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:04.454404   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:04.466040   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:04.466053   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:04.482650   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:04.482662   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:04.496117   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:04.496129   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:04.517876   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:04.517894   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:04.544329   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:04.544343   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:07.061550   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:12.063641   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:12.063758   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:12.082149   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:12.082243   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:12.093144   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:12.093224   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:12.104091   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:12.104160   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:12.114718   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:12.114788   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:12.125367   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:12.125429   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:12.143149   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:12.143216   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:12.154335   18178 logs.go:276] 0 containers: []
	W0729 04:32:12.154346   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:12.154398   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:12.165174   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:12.165189   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:12.165195   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:12.177483   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:12.177499   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:12.182180   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:12.182187   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:12.198724   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:12.198739   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:12.211203   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:12.211213   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:12.235950   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:12.235964   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:12.248067   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:12.248081   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:12.282319   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:12.282336   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:12.306512   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:12.306525   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:12.320739   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:12.320753   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:12.338822   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:12.338836   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:12.349833   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:12.349845   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:12.385253   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:12.385262   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:12.399186   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:12.399198   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:12.412445   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:12.412460   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:12.423688   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:12.423704   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:12.437739   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:12.437756   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:14.954483   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:19.956708   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:19.956878   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:19.970173   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:19.970251   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:19.981532   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:19.981598   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:19.997010   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:19.997076   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:20.007928   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:20.007994   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:20.021682   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:20.021753   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:20.032440   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:20.032498   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:20.043284   18178 logs.go:276] 0 containers: []
	W0729 04:32:20.043298   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:20.043353   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:20.053766   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:20.053785   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:20.053791   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:20.065635   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:20.065646   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:20.077283   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:20.077299   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:20.089601   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:20.089614   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:20.094527   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:20.094537   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:20.108663   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:20.108674   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:20.122323   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:20.122334   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:20.138476   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:20.138487   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:20.149334   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:20.149345   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:20.172423   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:20.172434   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:20.207017   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:20.207026   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:20.242119   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:20.242134   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:20.267235   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:20.267247   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:20.281099   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:20.281112   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:20.301878   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:20.301890   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:20.319277   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:20.319288   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:20.334977   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:20.334989   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:22.848267   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:27.850802   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:27.850915   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:27.863596   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:27.863677   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:27.876317   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:27.876390   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:27.886830   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:27.886898   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:27.897580   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:27.897647   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:27.908590   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:27.908658   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:27.919241   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:27.919303   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:27.929243   18178 logs.go:276] 0 containers: []
	W0729 04:32:27.929256   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:27.929308   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:27.939980   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:27.939996   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:27.940003   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:27.964242   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:27.964251   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:27.977744   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:27.977753   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:27.989135   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:27.989146   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:28.007060   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:28.007070   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:28.020873   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:28.020883   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:28.056076   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:28.056086   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:28.075769   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:28.075778   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:28.101734   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:28.101750   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:28.119459   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:28.119474   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:28.131313   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:28.131323   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:28.159539   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:28.159553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:28.174185   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:28.174195   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:28.185816   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:28.185827   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:28.198289   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:28.198301   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:28.233064   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:28.233072   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:28.237605   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:28.237612   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:30.751073   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:35.751964   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:35.752141   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:35.763940   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:35.764016   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:35.775247   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:35.775316   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:35.786802   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:35.786868   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:35.797224   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:35.797283   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:35.807982   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:35.808046   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:35.826548   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:35.826613   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:35.840892   18178 logs.go:276] 0 containers: []
	W0729 04:32:35.840905   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:35.840960   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:35.853234   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:35.853255   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:35.853262   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:35.867187   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:35.867203   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:35.878542   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:35.878553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:35.894739   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:35.894751   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:35.912622   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:35.912633   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:35.937585   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:35.937597   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:35.975299   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:35.975309   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:35.989366   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:35.989375   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:36.014927   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:36.014941   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:36.026914   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:36.026930   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:36.038408   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:36.038420   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:36.050327   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:36.050338   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:36.054836   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:36.054843   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:36.090951   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:36.090966   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:36.102861   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:36.102873   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:36.117377   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:36.117390   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:36.131722   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:36.131732   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:38.648590   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:43.649537   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:43.649685   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:43.661851   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:43.661946   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:43.673492   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:43.673565   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:43.684148   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:43.684215   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:43.695127   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:43.695196   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:43.706006   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:43.706070   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:43.716828   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:43.716892   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:43.727828   18178 logs.go:276] 0 containers: []
	W0729 04:32:43.727840   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:43.727897   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:43.738876   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:43.738893   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:43.738899   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:43.776050   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:43.776061   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:43.800179   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:43.800189   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:43.813856   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:43.813867   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:43.827271   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:43.827283   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:43.842456   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:43.842469   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:43.858453   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:43.858464   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:43.871742   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:43.871755   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:43.885059   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:43.885075   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:43.953915   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:43.953938   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:43.967279   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:43.967295   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:43.982078   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:43.982095   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:43.996928   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:43.996944   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:44.017581   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:44.017597   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:44.032697   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:44.032711   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:44.037572   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:44.037583   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:44.074215   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:44.074228   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:46.590141   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:51.592512   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:51.592646   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:51.603539   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:51.603612   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:51.615020   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:51.615100   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:51.626189   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:51.626261   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:51.637140   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:51.637205   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:51.648567   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:51.648639   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:51.660640   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:51.660708   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:51.671363   18178 logs.go:276] 0 containers: []
	W0729 04:32:51.671376   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:51.671432   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:51.686071   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:51.686089   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:51.686095   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:51.700349   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:51.700366   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:51.717790   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:51.717804   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:51.737654   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:51.737665   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:51.761943   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:51.761954   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:51.774281   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:51.774296   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:51.788549   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:51.788562   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:51.802928   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:51.802938   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:51.817243   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:51.817258   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:51.831772   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:51.831781   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:51.866078   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:51.866088   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:51.889489   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:51.889499   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:51.900701   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:51.900712   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:51.905131   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:51.905138   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:51.919804   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:51.919814   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:51.954559   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:51.954570   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:51.990790   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:51.990800   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:54.506808   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:59.508918   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:59.509077   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:59.530440   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:59.530509   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:59.546395   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:59.546469   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:59.558708   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:59.558784   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:59.571439   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:59.571517   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:59.582658   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:59.582738   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:59.594593   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:59.594667   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:59.613052   18178 logs.go:276] 0 containers: []
	W0729 04:32:59.613066   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:59.613135   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:59.629493   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:59.629513   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:59.629520   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:59.643307   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:59.643320   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:59.657335   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:59.657348   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:59.673779   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:59.673791   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:59.709382   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:59.709395   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:59.724663   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:59.724674   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:59.740207   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:59.740220   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:59.759274   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:59.759287   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:59.772949   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:59.772962   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:59.785441   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:59.785456   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:59.799659   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:59.799672   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:59.840137   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:59.840153   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:59.845140   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:59.845151   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:59.861989   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:59.862012   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:59.886405   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:59.886429   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:59.902461   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:59.902477   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:59.933101   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:59.933130   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:33:02.450164   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:07.452271   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:07.452444   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:33:07.469648   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:33:07.469743   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:33:07.482861   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:33:07.482930   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:33:07.494142   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:33:07.494213   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:33:07.509186   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:33:07.509255   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:33:07.519840   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:33:07.519905   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:33:07.530753   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:33:07.530817   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:33:07.542832   18178 logs.go:276] 0 containers: []
	W0729 04:33:07.542846   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:33:07.542903   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:33:07.553508   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:33:07.553526   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:33:07.553531   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:33:07.567774   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:33:07.567786   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:33:07.584979   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:33:07.584990   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:33:07.596765   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:33:07.596779   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:33:07.634075   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:33:07.634088   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:33:07.638443   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:33:07.638451   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:33:07.663982   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:33:07.663998   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:33:07.675690   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:33:07.675701   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:33:07.712947   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:33:07.712959   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:33:07.728088   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:33:07.728099   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:33:07.744104   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:33:07.744115   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:33:07.757308   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:33:07.757320   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:33:07.775433   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:33:07.775445   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:33:07.786784   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:33:07.786797   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:33:07.801711   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:33:07.801721   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:33:07.817012   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:33:07.817024   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:33:07.828590   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:33:07.828602   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:33:10.355057   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:15.356799   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:15.356989   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:33:15.376557   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:33:15.376657   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:33:15.390906   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:33:15.390989   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:33:15.402225   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:33:15.402302   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:33:15.417240   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:33:15.417311   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:33:15.427904   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:33:15.427987   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:33:15.438081   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:33:15.438150   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:33:15.448037   18178 logs.go:276] 0 containers: []
	W0729 04:33:15.448047   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:33:15.448108   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:33:15.461446   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:33:15.461465   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:33:15.461471   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:33:15.475154   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:33:15.475168   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:33:15.500481   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:33:15.500494   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:33:15.527382   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:33:15.527395   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:33:15.542285   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:33:15.542298   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:33:15.554141   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:33:15.554152   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:33:15.568497   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:33:15.568511   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:33:15.585724   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:33:15.585738   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:33:15.601530   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:33:15.601543   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:33:15.613209   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:33:15.613220   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:33:15.649763   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:33:15.649772   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:33:15.685885   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:33:15.685896   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:33:15.699485   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:33:15.699495   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:33:15.711393   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:33:15.711406   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:33:15.723375   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:33:15.723386   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:33:15.728309   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:33:15.728317   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:33:15.742862   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:33:15.742872   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:33:18.269405   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:23.271492   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:23.271529   18178 kubeadm.go:597] duration metric: took 4m4.227687458s to restartPrimaryControlPlane
	W0729 04:33:23.271563   18178 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
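
The 4m4s reported above for restartPrimaryControlPlane is the repeated probe cycle visible throughout this log: each "Checking apiserver healthz" line is followed roughly five seconds later by a "stopped ... Client.Timeout exceeded" line. For orientation, here is a minimal Go sketch of that kind of probe loop. It is not minikube's actual api_server.go code; the 5-second client timeout, the 2-second back-off, the 4-minute deadline, and the skipped TLS verification are all assumptions inferred from the timestamps and durations in this section.

// Minimal sketch of the healthz probe loop reflected in the api_server.go
// lines of this log; NOT minikube's actual implementation. Timeout, back-off,
// deadline, and TLS handling are assumptions inferred from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://10.0.2.15:8443/healthz"
	client := &http.Client{
		// Assumed: matches the ~5s gap between each "Checking" and "stopped" pair.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// A real probe would trust the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Assumed overall budget; cf. "took 4m4.227687458s to restartPrimaryControlPlane".
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		fmt.Println("Checking apiserver healthz at", url, "...")
		resp, err := client.Get(url)
		if err != nil {
			// With the apiserver down, Client.Timeout yields exactly the
			// "context deadline exceeded (Client.Timeout exceeded while
			// awaiting headers)" errors seen throughout this section.
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // assumed back-off between attempts
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
	fmt.Println("gave up waiting for a healthy apiserver")
}
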
	I0729 04:33:23.271591   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:33:24.217554   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:33:24.222603   18178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:33:24.225390   18178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:33:24.227981   18178 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:33:24.227987   18178 kubeadm.go:157] found existing configuration files:
	
	I0729 04:33:24.228008   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/admin.conf
	I0729 04:33:24.230562   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:33:24.230587   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:33:24.233078   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/kubelet.conf
	I0729 04:33:24.235781   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:33:24.235802   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:33:24.238775   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/controller-manager.conf
	I0729 04:33:24.241264   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:33:24.241284   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:33:24.243989   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/scheduler.conf
	I0729 04:33:24.247030   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:33:24.247053   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:33:24.249719   18178 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:33:24.267140   18178 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:33:24.267171   18178 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:33:24.314720   18178 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:33:24.314780   18178 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:33:24.314848   18178 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:33:24.363457   18178 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:33:24.367546   18178 out.go:204]   - Generating certificates and keys ...
	I0729 04:33:24.367593   18178 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:33:24.367625   18178 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:33:24.367669   18178 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:33:24.367703   18178 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:33:24.367743   18178 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:33:24.367773   18178 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:33:24.367806   18178 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:33:24.367839   18178 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:33:24.367876   18178 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:33:24.367909   18178 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:33:24.367927   18178 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:33:24.367955   18178 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:33:24.492064   18178 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:33:24.681792   18178 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:33:24.751412   18178 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:33:24.786224   18178 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:33:24.816612   18178 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:33:24.816967   18178 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:33:24.817089   18178 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:33:24.895716   18178 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:33:24.898664   18178 out.go:204]   - Booting up control plane ...
	I0729 04:33:24.898747   18178 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:33:24.898792   18178 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:33:24.899248   18178 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:33:24.899294   18178 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:33:24.899375   18178 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:33:29.402778   18178 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504306 seconds
	I0729 04:33:29.402844   18178 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:33:29.407090   18178 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:33:29.924240   18178 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:33:29.924574   18178 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-317000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:33:30.427806   18178 kubeadm.go:310] [bootstrap-token] Using token: smrxp0.0qq2oz84ss0v9vcx
	I0729 04:33:30.434051   18178 out.go:204]   - Configuring RBAC rules ...
	I0729 04:33:30.434106   18178 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:33:30.434148   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:33:30.439690   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:33:30.440562   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:33:30.446045   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:33:30.450156   18178 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:33:30.453430   18178 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:33:30.630542   18178 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:33:30.833280   18178 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:33:30.833668   18178 kubeadm.go:310] 
	I0729 04:33:30.833699   18178 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:33:30.833720   18178 kubeadm.go:310] 
	I0729 04:33:30.833786   18178 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:33:30.833816   18178 kubeadm.go:310] 
	I0729 04:33:30.833849   18178 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:33:30.833879   18178 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:33:30.833903   18178 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:33:30.833917   18178 kubeadm.go:310] 
	I0729 04:33:30.834004   18178 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:33:30.834039   18178 kubeadm.go:310] 
	I0729 04:33:30.834067   18178 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:33:30.834075   18178 kubeadm.go:310] 
	I0729 04:33:30.834124   18178 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:33:30.834185   18178 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:33:30.834313   18178 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:33:30.834319   18178 kubeadm.go:310] 
	I0729 04:33:30.834359   18178 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:33:30.834401   18178 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:33:30.834408   18178 kubeadm.go:310] 
	I0729 04:33:30.834499   18178 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token smrxp0.0qq2oz84ss0v9vcx \
	I0729 04:33:30.834586   18178 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 \
	I0729 04:33:30.834620   18178 kubeadm.go:310] 	--control-plane 
	I0729 04:33:30.834624   18178 kubeadm.go:310] 
	I0729 04:33:30.834664   18178 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:33:30.834679   18178 kubeadm.go:310] 
	I0729 04:33:30.834797   18178 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token smrxp0.0qq2oz84ss0v9vcx \
	I0729 04:33:30.834872   18178 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 
	I0729 04:33:30.834979   18178 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 04:33:30.835076   18178 cni.go:84] Creating CNI manager for ""
	I0729 04:33:30.835107   18178 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:33:30.841647   18178 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:33:30.849663   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:33:30.852530   18178 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 04:33:30.857387   18178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:33:30.857433   18178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:33:30.857511   18178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-317000 minikube.k8s.io/updated_at=2024_07_29T04_33_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=running-upgrade-317000 minikube.k8s.io/primary=true
	I0729 04:33:30.906674   18178 kubeadm.go:1113] duration metric: took 49.282833ms to wait for elevateKubeSystemPrivileges
	I0729 04:33:30.906688   18178 ops.go:34] apiserver oom_adj: -16
	I0729 04:33:30.906693   18178 kubeadm.go:394] duration metric: took 4m11.876413375s to StartCluster
	I0729 04:33:30.906703   18178 settings.go:142] acquiring lock: {Name:mk7d7deaddc5161eee59fbf4fca49f66001c194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:33:30.906870   18178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:33:30.907278   18178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:33:30.907499   18178 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:33:30.907510   18178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:33:30.907550   18178 config.go:182] Loaded profile config "running-upgrade-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:33:30.907553   18178 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-317000"
	I0729 04:33:30.907565   18178 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-317000"
	W0729 04:33:30.907568   18178 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:33:30.907580   18178 host.go:66] Checking if "running-upgrade-317000" exists ...
	I0729 04:33:30.907580   18178 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-317000"
	I0729 04:33:30.907591   18178 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-317000"
	I0729 04:33:30.908411   18178 kapi.go:59] client config for running-upgrade-317000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ccc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:33:30.908532   18178 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-317000"
	W0729 04:33:30.908537   18178 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:33:30.908542   18178 host.go:66] Checking if "running-upgrade-317000" exists ...
	I0729 04:33:30.911585   18178 out.go:177] * Verifying Kubernetes components...
	I0729 04:33:30.911941   18178 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:33:30.917952   18178 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:33:30.917959   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:33:30.921527   18178 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:33:30.924560   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:33:30.928597   18178 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:33:30.928604   18178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:33:30.928609   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:33:31.018382   18178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:33:31.023832   18178 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:33:31.023876   18178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:33:31.028646   18178 api_server.go:72] duration metric: took 121.139584ms to wait for apiserver process to appear ...
	I0729 04:33:31.028654   18178 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:33:31.028660   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:31.042006   18178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:33:31.065692   18178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:33:36.030723   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:36.030807   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:41.031335   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:41.031377   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:46.031781   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:46.031848   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:51.032374   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:51.032436   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:56.033281   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:56.033338   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:01.034297   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:01.034355   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:34:01.379261   18178 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:34:01.383569   18178 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:34:01.390487   18178 addons.go:510] duration metric: took 30.483747083s for enable addons: enabled=[storage-provisioner]
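
Note the asymmetry in the addon results above: the storage-provisioner manifest is applied by running kubectl inside the guest over SSH, while default-storageclass is enabled from the host through a Kubernetes API client, which is why only the latter surfaces the "dial tcp 10.0.2.15:8443: i/o timeout" while the apiserver is unreachable. A minimal client-go sketch of that failing call follows; it is illustrative only, not minikube's code, and the kubeconfig path and 5-second timeout are placeholders rather than values from this run.

// Minimal client-go sketch of the call behind the default-storageclass
// failure above; NOT minikube's implementation. kubeconfigPath and the
// timeout are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfigPath := "/path/to/kubeconfig" // placeholder
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 5 * time.Second // assumed; bounds the dial like the timeouts in this log

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// With the apiserver down this returns an error of the form seen above:
	// Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp ...: i/o timeout
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
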
	I0729 04:34:06.035663   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:06.035721   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:11.037530   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:11.037580   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:16.039895   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:16.039914   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:21.041220   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:21.041273   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:26.043456   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:26.043502   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:31.044847   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:31.044968   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:31.056808   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:31.056875   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:31.067908   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:31.067978   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:31.078489   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:31.078564   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:31.089824   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:31.089891   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:31.100269   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:31.100335   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:31.111448   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:31.111516   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:31.121331   18178 logs.go:276] 0 containers: []
	W0729 04:34:31.121345   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:31.121397   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:31.131761   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:31.131775   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:31.131780   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:31.146834   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:31.146849   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:31.158656   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:31.158667   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:31.176445   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:31.176456   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:31.201503   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:31.201514   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:31.213137   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:31.213151   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:31.217425   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:31.217434   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:31.252886   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:31.252898   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:31.267282   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:31.267293   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:31.281209   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:31.281220   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:31.292509   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:31.292522   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:31.303854   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:31.303868   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:31.315125   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:31.315135   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:33.854512   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:38.856670   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:38.856772   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:38.872139   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:38.872216   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:38.888228   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:38.888293   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:38.898584   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:38.898650   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:38.908794   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:38.908870   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:38.919686   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:38.919747   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:38.930130   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:38.930192   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:38.940251   18178 logs.go:276] 0 containers: []
	W0729 04:34:38.940262   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:38.940315   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:38.950554   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:38.950572   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:38.950578   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:38.961592   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:38.961604   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:38.973074   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:38.973085   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:38.984827   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:38.984836   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:38.995970   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:38.995983   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:39.033674   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:39.033685   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:39.037976   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:39.037985   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:39.052210   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:39.052221   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:39.068654   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:39.068668   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:39.092727   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:39.092738   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:39.128763   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:39.128775   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:39.145461   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:39.145473   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:39.163910   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:39.163920   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:41.677585   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:46.680110   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:46.680499   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:46.718792   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:46.718930   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:46.744572   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:46.744669   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:46.758765   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:46.758847   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:46.770378   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:46.770447   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:46.792850   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:46.792919   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:46.805794   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:46.805897   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:46.816452   18178 logs.go:276] 0 containers: []
	W0729 04:34:46.816463   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:46.816525   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:46.826926   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:46.826938   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:46.826944   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:46.872651   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:46.872668   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:46.877675   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:46.877686   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:46.913617   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:46.913630   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:46.927743   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:46.927755   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:46.941542   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:46.941554   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:46.953236   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:46.953248   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:46.967714   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:46.967727   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:46.979252   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:46.979264   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:46.991286   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:46.991297   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:47.003408   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:47.003420   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:47.024540   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:47.024553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:47.036697   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:47.036707   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:49.563770   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:54.566006   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:54.566209   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:54.594577   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:54.594698   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:54.613829   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:54.613908   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:54.627895   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:54.627970   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:54.639597   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:54.639658   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:54.656775   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:54.656841   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:54.667447   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:54.667510   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:54.677686   18178 logs.go:276] 0 containers: []
	W0729 04:34:54.677698   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:54.677750   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:54.687931   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:54.687947   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:54.687952   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:54.725634   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:54.725646   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:54.762159   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:54.762173   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:54.773708   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:54.773719   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:54.785695   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:54.785706   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:54.800785   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:54.800796   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:54.818892   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:54.818906   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:54.830394   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:54.830409   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:54.842262   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:54.842273   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:54.847181   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:54.847190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:54.867383   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:54.867393   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:54.885414   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:54.885428   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:54.897216   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:54.897230   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:57.422592   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:02.424725   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:02.424925   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:02.446424   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:02.446529   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:02.462598   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:02.462673   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:02.476196   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:02.476265   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:02.486586   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:02.486650   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:02.500489   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:02.500556   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:02.510808   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:02.510872   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:02.531030   18178 logs.go:276] 0 containers: []
	W0729 04:35:02.531043   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:02.531101   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:02.542031   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:02.542049   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:02.542055   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:02.556065   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:02.556079   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:02.567080   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:02.567091   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:02.582531   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:02.582545   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:02.596802   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:02.596812   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:02.614447   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:02.614457   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:02.626052   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:02.626061   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:02.650881   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:02.650888   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:02.686064   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:02.686077   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:02.691172   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:02.691181   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:02.708229   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:02.708239   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:02.720904   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:02.720920   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:02.732676   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:02.732692   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:05.270206   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:10.271383   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:10.271584   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:10.291667   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:10.291760   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:10.313686   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:10.313757   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:10.324835   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:10.324896   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:10.335022   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:10.335085   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:10.345952   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:10.346018   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:10.356218   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:10.356290   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:10.366452   18178 logs.go:276] 0 containers: []
	W0729 04:35:10.366465   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:10.366532   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:10.376717   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:10.376733   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:10.376738   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:10.387972   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:10.387981   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:10.424003   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:10.424012   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:10.428151   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:10.428160   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:10.442020   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:10.442031   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:10.455675   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:10.455686   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:10.466872   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:10.466886   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:10.481465   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:10.481476   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:10.498691   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:10.498700   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:10.509968   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:10.509979   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:10.544575   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:10.544586   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:10.556092   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:10.556103   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:10.567368   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:10.567380   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:13.092872   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:18.095164   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:18.095345   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:18.120489   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:18.120572   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:18.133864   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:18.133936   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:18.145351   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:18.145412   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:18.155623   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:18.155687   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:18.168717   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:18.168781   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:18.180029   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:18.180085   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:18.190038   18178 logs.go:276] 0 containers: []
	W0729 04:35:18.190051   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:18.190102   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:18.207762   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:18.207777   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:18.207783   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:18.212181   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:18.212190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:18.223307   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:18.223318   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:18.235680   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:18.235693   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:18.248341   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:18.248355   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:18.285971   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:18.285984   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:18.300424   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:18.300438   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:18.314824   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:18.314839   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:18.330523   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:18.330537   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:18.347072   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:18.347086   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:18.364553   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:18.364565   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:18.376417   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:18.376427   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:18.399369   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:18.399377   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:20.934766   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:25.937033   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:25.937272   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:25.955592   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:25.955676   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:25.972807   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:25.972882   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:25.987358   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:25.987423   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:25.998022   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:25.998097   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:26.008538   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:26.008610   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:26.018986   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:26.019052   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:26.029113   18178 logs.go:276] 0 containers: []
	W0729 04:35:26.029125   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:26.029180   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:26.039510   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:26.039525   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:26.039530   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:26.050520   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:26.050533   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:26.062321   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:26.062332   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:26.079799   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:26.079812   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:26.091428   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:26.091440   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:26.129394   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:26.129413   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:26.134569   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:26.134576   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:26.171775   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:26.171787   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:26.186153   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:26.186164   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:26.202236   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:26.202247   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:26.213727   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:26.213739   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:26.228380   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:26.228390   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:26.253030   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:26.253037   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:28.766882   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:33.769106   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:33.769370   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:33.793371   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:33.793471   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:33.809710   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:33.809787   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:33.822417   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:33.822492   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:33.833136   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:33.833203   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:33.844122   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:33.844194   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:33.854866   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:33.854936   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:33.865094   18178 logs.go:276] 0 containers: []
	W0729 04:35:33.865105   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:33.865160   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:33.875294   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:33.875308   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:33.875314   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:33.887569   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:33.887583   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:33.905303   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:33.905314   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:33.921315   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:33.921324   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:33.932948   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:33.932959   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:33.966842   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:33.966853   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:33.980847   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:33.980861   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:33.992679   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:33.992695   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:34.006961   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:34.006970   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:34.019538   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:34.019549   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:34.044235   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:34.044244   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:34.080977   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:34.080987   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:34.085468   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:34.085475   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:36.600006   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:41.602291   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:41.602513   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:41.625993   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:41.626109   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:41.641982   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:41.642056   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:41.654770   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:41.654838   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:41.665817   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:41.665874   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:41.675951   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:41.676022   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:41.686308   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:41.686373   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:41.696417   18178 logs.go:276] 0 containers: []
	W0729 04:35:41.696430   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:41.696494   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:41.707037   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:41.707055   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:41.707061   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:41.742559   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:41.742566   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:41.746483   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:41.746492   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:41.780563   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:41.780574   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:41.799179   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:41.799190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:41.813814   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:41.813827   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:41.825263   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:41.825274   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:41.837144   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:41.837158   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:41.851509   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:41.851523   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:41.863583   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:41.863596   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:41.881166   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:41.881178   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:41.893983   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:41.893992   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:41.916880   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:41.916890   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
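
Every fallback pass in this log follows the same shape: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component to find its container, then docker logs --tail 400 <id> for each match, with a warning (as for "kindnet" above) when nothing matches. A rough Go equivalent of that enumeration is sketched below; it runs docker locally for simplicity, whereas minikube executes these commands inside the VM over SSH, and gatherComponentLogs is a hypothetical name.

// Sketch of the per-component log gathering seen in this log (illustrative).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherComponentLogs(component string) error {
	// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		// the log emits a warning in this case, e.g. for "kindnet"
		fmt.Printf("No container was found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		// Mirrors: docker logs --tail 400 <id>
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"storage-provisioner"} {
		_ = gatherComponentLogs(c)
	}
}
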
	I0729 04:35:44.430172   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:49.431595   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:49.431697   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:49.442908   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:49.442978   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:49.455924   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:49.455999   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:49.471666   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:49.471735   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:49.485975   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:49.486046   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:49.496994   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:49.497060   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:49.507761   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:49.507822   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:49.518455   18178 logs.go:276] 0 containers: []
	W0729 04:35:49.518465   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:49.518518   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:49.529079   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:49.529095   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:49.529101   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:49.546953   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:49.546963   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:49.582565   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:35:49.582576   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:35:49.594185   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:49.594196   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:49.605486   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:49.605498   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:49.616963   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:49.616974   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:49.640248   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:49.640256   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:49.651897   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:49.651908   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:49.656885   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:35:49.656893   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:35:49.668855   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:49.668867   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:49.680943   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:49.680955   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:49.693270   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:49.693279   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:49.730478   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:49.730486   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:49.746453   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:49.746462   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:49.760637   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:49.760652   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:52.277298   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:57.279633   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:57.279827   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:57.296342   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:57.296418   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:57.310979   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:57.311040   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:57.322885   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:57.322953   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:57.333550   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:57.333612   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:57.343977   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:57.344036   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:57.354792   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:57.354856   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:57.365516   18178 logs.go:276] 0 containers: []
	W0729 04:35:57.365527   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:57.365577   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:57.376751   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:57.376766   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:57.376772   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:57.397173   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:57.397185   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:57.421721   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:35:57.421727   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:35:57.433208   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:35:57.433223   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:35:57.445679   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:57.445693   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:57.484332   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:57.484346   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:57.496538   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:57.496552   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:57.508494   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:57.508508   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:57.547887   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:57.547898   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:57.568369   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:57.568381   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:57.582322   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:57.582334   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:57.596828   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:57.596840   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:57.608551   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:57.608563   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:57.620110   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:57.620120   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:57.625094   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:57.625101   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:00.140607   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:05.142799   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:05.143031   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:05.161816   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:05.161908   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:05.175784   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:05.175844   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:05.187434   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:05.187506   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:05.201532   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:05.201597   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:05.212143   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:05.212215   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:05.226629   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:05.226720   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:05.236648   18178 logs.go:276] 0 containers: []
	W0729 04:36:05.236659   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:05.236716   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:05.247559   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:05.247575   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:05.247580   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:05.263252   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:05.263262   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:05.278655   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:05.278666   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:05.290292   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:05.290304   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:05.294941   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:05.294950   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:05.330279   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:05.330290   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:05.344932   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:05.344943   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:05.358181   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:05.358191   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:05.393468   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:05.393476   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:05.404871   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:05.404883   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:05.421854   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:05.421864   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:05.440452   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:05.440464   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:05.454878   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:05.454890   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:05.465753   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:05.465763   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:05.480342   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:05.480355   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:08.005859   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:13.008189   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:13.008615   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:13.049246   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:13.049418   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:13.071420   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:13.071541   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:13.087455   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:13.087535   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:13.100057   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:13.100125   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:13.111422   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:13.111497   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:13.126253   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:13.126322   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:13.137295   18178 logs.go:276] 0 containers: []
	W0729 04:36:13.137306   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:13.137365   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:13.149348   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:13.149368   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:13.149373   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:13.161189   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:13.161203   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:13.173354   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:13.173367   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:13.184891   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:13.184906   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:13.211854   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:13.211865   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:13.223381   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:13.223392   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:13.261371   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:13.261382   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:13.265466   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:13.265474   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:13.279133   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:13.279143   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:13.295191   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:13.295201   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:13.309981   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:13.310038   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:13.322572   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:13.322584   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:13.341373   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:13.341388   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:13.379525   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:13.379537   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:13.394927   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:13.394942   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:15.908168   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:20.910544   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:20.910778   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:20.943312   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:20.943405   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:20.959260   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:20.959334   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:20.970999   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:20.971068   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:20.981629   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:20.981694   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:20.992281   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:20.992356   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:21.002625   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:21.002682   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:21.012544   18178 logs.go:276] 0 containers: []
	W0729 04:36:21.012555   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:21.012608   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:21.028658   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:21.028675   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:21.028680   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:21.053252   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:21.053259   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:21.089210   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:21.089222   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:21.100399   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:21.100410   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:21.112255   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:21.112266   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:21.127562   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:21.127573   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:21.139433   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:21.139443   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:21.151582   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:21.151594   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:21.189067   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:21.189076   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:21.203177   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:21.203190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:21.217184   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:21.217195   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:21.229200   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:21.229212   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:21.246576   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:21.246587   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:21.258710   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:21.258724   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:21.270765   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:21.270776   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:23.777690   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:28.779913   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:28.780192   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:28.805055   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:28.805178   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:28.821078   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:28.821159   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:28.833913   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:28.833993   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:28.844963   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:28.845030   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:28.855222   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:28.855288   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:28.865618   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:28.865687   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:28.875775   18178 logs.go:276] 0 containers: []
	W0729 04:36:28.875789   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:28.875841   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:28.886303   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:28.886321   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:28.886327   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:28.897815   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:28.897826   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:28.909521   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:28.909535   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:28.920976   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:28.920986   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:28.946452   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:28.946465   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:28.958484   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:28.958497   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:28.970169   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:28.970181   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:28.981787   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:28.981799   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:28.993424   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:28.993435   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:28.998339   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:28.998346   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:29.012874   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:29.012889   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:29.048645   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:29.048654   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:29.082633   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:29.082647   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:29.097305   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:29.097319   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:29.122859   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:29.122873   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:31.643623   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:36.645935   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:36.646078   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:36.662152   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:36.662237   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:36.675368   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:36.675443   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:36.686083   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:36.686156   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:36.704001   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:36.704062   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:36.716180   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:36.716257   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:36.726953   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:36.727032   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:36.737155   18178 logs.go:276] 0 containers: []
	W0729 04:36:36.737169   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:36.737223   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:36.747977   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:36.747994   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:36.747999   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:36.768212   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:36.768221   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:36.779476   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:36.779486   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:36.795816   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:36.795826   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:36.810381   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:36.810394   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:36.828186   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:36.828197   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:36.863613   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:36.863624   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:36.877481   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:36.877491   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:36.889370   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:36.889384   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:36.900943   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:36.900952   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:36.905869   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:36.905877   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:36.917627   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:36.917637   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:36.931127   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:36.931138   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:36.943060   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:36.943070   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:36.966823   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:36.966834   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:39.504195   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:44.506318   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:44.506535   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:44.523635   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:44.523715   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:44.537148   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:44.537222   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:44.548747   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:44.548814   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:44.563355   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:44.563420   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:44.573924   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:44.573987   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:44.584582   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:44.584657   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:44.594807   18178 logs.go:276] 0 containers: []
	W0729 04:36:44.594818   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:44.594872   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:44.605420   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:44.605439   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:44.605446   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:44.618757   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:44.618768   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:44.643252   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:44.643259   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:44.680167   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:44.680176   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:44.715441   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:44.715456   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:44.729428   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:44.729439   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:44.734074   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:44.734082   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:44.747393   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:44.747406   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:44.759273   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:44.759287   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:44.773355   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:44.773366   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:44.784878   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:44.784889   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:44.803881   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:44.803893   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:44.815855   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:44.815866   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:44.827776   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:44.827790   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:44.842923   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:44.842932   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:47.372358   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:52.374524   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:52.374739   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:52.399775   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:52.399883   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:52.416063   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:52.416131   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:52.430870   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:52.430939   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:52.441918   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:52.441990   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:52.452806   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:52.452871   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:52.463730   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:52.463805   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:52.474444   18178 logs.go:276] 0 containers: []
	W0729 04:36:52.474456   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:52.474511   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:52.485177   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:52.485195   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:52.485200   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:52.489913   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:52.489920   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:52.505025   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:52.505034   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:52.516667   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:52.516680   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:52.528295   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:52.528304   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:52.566341   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:52.566351   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:52.583171   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:52.583183   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:52.594711   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:52.594725   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:52.606222   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:52.606231   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:52.620797   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:52.620809   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:52.638820   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:52.638834   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:52.662702   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:52.662720   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:52.680195   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:52.680206   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:52.694695   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:52.694707   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:52.706052   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:52.706067   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:55.242564   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:00.244795   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:00.244966   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:00.266210   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:00.266301   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:00.281499   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:00.281580   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:00.294125   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:00.294195   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:00.306855   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:00.306924   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:00.317258   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:00.317326   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:00.327676   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:00.327743   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:00.342050   18178 logs.go:276] 0 containers: []
	W0729 04:37:00.342062   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:00.342124   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:00.352827   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:00.352846   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:00.352851   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:00.377382   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:00.377388   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:00.390769   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:00.390780   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:00.405424   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:00.405435   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:00.417568   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:00.417579   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:00.455492   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:00.455505   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:00.469713   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:00.469724   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:00.481440   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:00.481451   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:00.498876   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:00.498888   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:00.510812   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:00.510823   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:00.522796   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:00.522806   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:00.534333   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:00.534346   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:00.548309   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:00.548320   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:00.562850   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:00.562863   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:00.567392   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:00.567399   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:03.105204   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:08.107239   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:08.107353   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:08.118255   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:08.118321   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:08.128846   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:08.128916   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:08.139991   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:08.140060   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:08.150656   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:08.150722   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:08.161226   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:08.161291   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:08.175193   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:08.175267   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:08.187005   18178 logs.go:276] 0 containers: []
	W0729 04:37:08.187020   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:08.187082   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:08.197401   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:08.197418   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:08.197423   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:08.211847   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:08.211861   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:08.223518   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:08.223532   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:08.241466   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:08.241476   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:08.276938   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:08.276952   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:08.312967   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:08.312981   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:08.330227   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:08.330237   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:08.355071   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:08.355082   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:08.371112   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:08.371124   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:08.383497   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:08.383508   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:08.388579   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:08.388587   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:08.404546   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:08.404559   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:08.416130   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:08.416141   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:08.428848   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:08.428860   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:08.440452   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:08.440466   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:10.953878   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:15.955679   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:15.955786   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:15.968673   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:15.968742   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:15.979242   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:15.979312   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:15.989516   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:15.989587   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:15.999970   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:16.000028   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:16.013314   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:16.013391   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:16.023622   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:16.023685   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:16.033767   18178 logs.go:276] 0 containers: []
	W0729 04:37:16.033780   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:16.033841   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:16.044255   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:16.044275   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:16.044281   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:16.056122   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:16.056136   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:16.070689   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:16.070703   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:16.082198   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:16.082210   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:16.093975   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:16.093985   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:16.105109   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:16.105126   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:16.140228   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:16.140239   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:16.158343   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:16.158354   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:16.174375   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:16.174386   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:16.189000   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:16.189013   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:16.225995   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:16.226008   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:16.230751   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:16.230760   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:16.246000   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:16.246012   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:16.259925   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:16.259939   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:16.272167   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:16.272178   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:18.797583   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:23.799680   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:23.799821   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:23.810845   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:23.810919   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:23.821960   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:23.822031   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:23.833880   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:23.833953   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:23.844776   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:23.844847   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:23.856279   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:23.856353   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:23.867986   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:23.868060   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:23.884144   18178 logs.go:276] 0 containers: []
	W0729 04:37:23.884157   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:23.884223   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:23.895572   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:23.895590   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:23.895596   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:23.911914   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:23.911930   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:23.927750   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:23.927763   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:23.945788   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:23.945806   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:23.971420   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:23.971434   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:23.976842   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:23.976856   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:24.016301   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:24.016314   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:24.028277   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:24.028290   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:24.044816   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:24.044828   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:24.056867   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:24.056878   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:24.096640   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:24.096661   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:24.108927   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:24.108940   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:24.120804   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:24.120818   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:24.133481   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:24.133493   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:24.148923   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:24.148941   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:26.663650   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:31.665830   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:31.670332   18178 out.go:177] 
	W0729 04:37:31.673315   18178 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 04:37:31.673328   18178 out.go:239] * 
	W0729 04:37:31.673938   18178 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:37:31.689277   18178 out.go:177] 

** /stderr **
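Note: the stderr above shows one diagnostic cycle repeating until the 6m0s node wait expires: minikube polls the apiserver healthz endpoint, each probe times out with "context deadline exceeded" after roughly 5s, and the full log-gathering pass (kubelet, dmesg, Docker, per-component container logs, describe nodes, container status) runs again. A minimal sketch of the same probe, assuming the guest address from this run (10.0.2.15:8443) and skipping certificate verification purely for illustration:

	# probe the apiserver health endpoint with the same ~5s budget seen in the log
	curl -k --max-time 5 https://10.0.2.15:8443/healthz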
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-317000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
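For reproduction, a sketch using only the command and flags recorded above (the binary path and profile name are specific to this run); the final step collects logs the way the error box in the trace suggests:

	out/minikube-darwin-arm64 start -p running-upgrade-317000 --memory=2200 \
		--alsologtostderr -v=1 --driver=qemu2
	# exit status 80 (GUEST_START) is expected per the failure above; then:
	out/minikube-darwin-arm64 -p running-upgrade-317000 logs --file=logs.txt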
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-29 04:37:31.785371 -0700 PDT m=+1283.108644668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-317000 -n running-upgrade-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-317000 -n running-upgrade-317000: exit status 2 (15.648718542s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
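Note the mismatch here when scripting against this output: the host state prints as "Running" even though the status command exited non-zero, so checks should read the exit code as well as the template output. A sketch using the same Go-template flag, with the profile name taken from this run:

	# quote the template so the shell passes the braces through literally
	state=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p running-upgrade-317000)
	rc=$?
	echo "host=${state} exit=${rc}"   # here: host=Running exit=2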
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-317000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-006000          | force-systemd-flag-006000 | jenkins | v1.33.1 | 29 Jul 24 04:27 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-914000              | force-systemd-env-914000  | jenkins | v1.33.1 | 29 Jul 24 04:27 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-914000           | force-systemd-env-914000  | jenkins | v1.33.1 | 29 Jul 24 04:27 PDT | 29 Jul 24 04:27 PDT |
	| start   | -p docker-flags-060000                | docker-flags-060000       | jenkins | v1.33.1 | 29 Jul 24 04:27 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-006000             | force-systemd-flag-006000 | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-006000          | force-systemd-flag-006000 | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT | 29 Jul 24 04:28 PDT |
	| start   | -p cert-expiration-855000             | cert-expiration-855000    | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-060000 ssh               | docker-flags-060000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-060000 ssh               | docker-flags-060000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-060000                | docker-flags-060000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT | 29 Jul 24 04:28 PDT |
	| start   | -p cert-options-193000                | cert-options-193000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-193000 ssh               | cert-options-193000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-193000 -- sudo        | cert-options-193000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-193000                | cert-options-193000       | jenkins | v1.33.1 | 29 Jul 24 04:28 PDT | 29 Jul 24 04:28 PDT |
	| start   | -p running-upgrade-317000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:28 PDT | 29 Jul 24 04:29 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-317000             | running-upgrade-317000    | jenkins | v1.33.1 | 29 Jul 24 04:29 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-855000             | cert-expiration-855000    | jenkins | v1.33.1 | 29 Jul 24 04:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-855000             | cert-expiration-855000    | jenkins | v1.33.1 | 29 Jul 24 04:31 PDT | 29 Jul 24 04:31 PDT |
	| start   | -p kubernetes-upgrade-813000          | kubernetes-upgrade-813000 | jenkins | v1.33.1 | 29 Jul 24 04:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-813000          | kubernetes-upgrade-813000 | jenkins | v1.33.1 | 29 Jul 24 04:31 PDT | 29 Jul 24 04:31 PDT |
	| start   | -p kubernetes-upgrade-813000          | kubernetes-upgrade-813000 | jenkins | v1.33.1 | 29 Jul 24 04:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-813000          | kubernetes-upgrade-813000 | jenkins | v1.33.1 | 29 Jul 24 04:31 PDT | 29 Jul 24 04:31 PDT |
	| start   | -p stopped-upgrade-514000             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:31 PDT | 29 Jul 24 04:32 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-514000 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 04:32 PDT | 29 Jul 24 04:32 PDT |
	| start   | -p stopped-upgrade-514000             | stopped-upgrade-514000    | jenkins | v1.33.1 | 29 Jul 24 04:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:32:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:32:29.820872   18743 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:32:29.821025   18743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:32:29.821029   18743 out.go:304] Setting ErrFile to fd 2...
	I0729 04:32:29.821032   18743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:32:29.821186   18743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:32:29.822330   18743 out.go:298] Setting JSON to false
	I0729 04:32:29.840226   18743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9118,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:32:29.840301   18743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:32:29.846036   18743 out.go:177] * [stopped-upgrade-514000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:32:29.854097   18743 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:32:29.854149   18743 notify.go:220] Checking for updates...
	I0729 04:32:29.863109   18743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:32:29.867035   18743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:32:29.870083   18743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:32:29.873092   18743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:32:29.876028   18743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:32:29.879326   18743 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:32:29.881993   18743 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:32:29.885053   18743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:32:29.888082   18743 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:32:29.893990   18743 start.go:297] selected driver: qemu2
	I0729 04:32:29.893997   18743 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:32:29.894047   18743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:32:29.896728   18743 cni.go:84] Creating CNI manager for ""
	I0729 04:32:29.896747   18743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:32:29.896782   18743 start.go:340] cluster config:
	{Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:32:29.896832   18743 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:32:29.905072   18743 out.go:177] * Starting "stopped-upgrade-514000" primary control-plane node in "stopped-upgrade-514000" cluster
	I0729 04:32:29.908968   18743 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:32:29.908985   18743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:32:29.908994   18743 cache.go:56] Caching tarball of preloaded images
	I0729 04:32:29.909062   18743 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:32:29.909069   18743 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:32:29.909116   18743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/config.json ...
	I0729 04:32:29.909472   18743 start.go:360] acquireMachinesLock for stopped-upgrade-514000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:32:29.909508   18743 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "stopped-upgrade-514000"
	I0729 04:32:29.909519   18743 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:32:29.909524   18743 fix.go:54] fixHost starting: 
	I0729 04:32:29.909626   18743 fix.go:112] recreateIfNeeded on stopped-upgrade-514000: state=Stopped err=<nil>
	W0729 04:32:29.909634   18743 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:32:29.917037   18743 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-514000" ...
	I0729 04:32:27.850802   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:27.850915   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:27.863596   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:27.863677   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:27.876317   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:27.876390   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:27.886830   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:27.886898   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:27.897580   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:27.897647   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:27.908590   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:27.908658   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:27.919241   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:27.919303   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:27.929243   18178 logs.go:276] 0 containers: []
	W0729 04:32:27.929256   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:27.929308   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:27.939980   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:27.939996   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:27.940003   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:27.964242   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:27.964251   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:27.977744   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:27.977753   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:27.989135   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:27.989146   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:28.007060   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:28.007070   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:28.020873   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:28.020883   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:28.056076   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:28.056086   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:28.075769   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:28.075778   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:28.101734   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:28.101750   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:28.119459   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:28.119474   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:28.131313   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:28.131323   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:28.159539   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:28.159553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:28.174185   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:28.174195   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:28.185816   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:28.185827   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:28.198289   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:28.198301   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:28.233064   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:28.233072   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:28.237605   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:28.237612   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:30.751073   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:29.921061   18743 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:32:29.921127   18743 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53329-:22,hostfwd=tcp::53330-:2376,hostname=stopped-upgrade-514000 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/disk.qcow2
	I0729 04:32:29.967282   18743 main.go:141] libmachine: STDOUT: 
	I0729 04:32:29.967311   18743 main.go:141] libmachine: STDERR: 
	I0729 04:32:29.967317   18743 main.go:141] libmachine: Waiting for VM to start (ssh -p 53329 docker@127.0.0.1)...
	I0729 04:32:35.751964   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:35.752141   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:35.763940   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:35.764016   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:35.775247   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:35.775316   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:35.786802   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:35.786868   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:35.797224   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:35.797283   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:35.807982   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:35.808046   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:35.826548   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:35.826613   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:35.840892   18178 logs.go:276] 0 containers: []
	W0729 04:32:35.840905   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:35.840960   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:35.853234   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:35.853255   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:35.853262   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:35.867187   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:35.867203   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:35.878542   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:35.878553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:35.894739   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:35.894751   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:35.912622   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:35.912633   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:35.937585   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:35.937597   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:35.975299   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:35.975309   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:35.989366   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:35.989375   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:36.014927   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:36.014941   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:36.026914   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:36.026930   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:36.038408   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:36.038420   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:36.050327   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:36.050338   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:36.054836   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:36.054843   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:36.090951   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:36.090966   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:36.102861   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:36.102873   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:36.117377   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:36.117390   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:36.131722   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:36.131732   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:38.648590   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:43.649537   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:43.649685   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:43.661851   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:43.661946   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:43.673492   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:43.673565   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:43.684148   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:43.684215   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:43.695127   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:43.695196   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:43.706006   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:43.706070   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:43.716828   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:43.716892   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:43.727828   18178 logs.go:276] 0 containers: []
	W0729 04:32:43.727840   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:43.727897   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:43.738876   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:43.738893   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:43.738899   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:43.776050   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:43.776061   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:43.800179   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:43.800189   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:43.813856   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:43.813867   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:43.827271   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:43.827283   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:43.842456   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:43.842469   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:43.858453   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:43.858464   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:43.871742   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:43.871755   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:43.885059   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:43.885075   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:43.953915   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:43.953938   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:43.967279   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:43.967295   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:43.982078   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:43.982095   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:43.996928   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:43.996944   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:44.017581   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:44.017597   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:44.032697   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:44.032711   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:44.037572   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:44.037583   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:44.074215   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:44.074228   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:46.590141   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:51.592512   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:51.592646   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:51.603539   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:51.603612   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:51.615020   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:51.615100   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:51.626189   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:51.626261   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:51.637140   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:51.637205   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:51.648567   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:51.648639   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:51.660640   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:51.660708   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:51.671363   18178 logs.go:276] 0 containers: []
	W0729 04:32:51.671376   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:51.671432   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:51.686071   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:51.686089   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:51.686095   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:51.700349   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:51.700366   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:51.717790   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:51.717804   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:51.737654   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:51.737665   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:51.761943   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:51.761954   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:51.774281   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:51.774296   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:51.788549   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:51.788562   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:51.802928   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:51.802938   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:51.817243   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:51.817258   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:51.831772   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:51.831781   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:51.866078   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:51.866088   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:51.889489   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:51.889499   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:51.900701   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:51.900712   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:51.905131   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:51.905138   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:32:51.919804   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:51.919814   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:51.954559   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:51.954570   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:51.990790   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:51.990800   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:50.119018   18743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/config.json ...
	I0729 04:32:50.119763   18743 machine.go:94] provisionDockerMachine start ...
	I0729 04:32:50.119971   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.120457   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.120471   18743 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:32:50.210542   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 04:32:50.210581   18743 buildroot.go:166] provisioning hostname "stopped-upgrade-514000"
	I0729 04:32:50.210701   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.210960   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.210973   18743 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-514000 && echo "stopped-upgrade-514000" | sudo tee /etc/hostname
	I0729 04:32:50.290770   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-514000
	
	I0729 04:32:50.290836   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.290980   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.290992   18743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:32:50.359169   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:32:50.359182   18743 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19341-15486/.minikube CaCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19341-15486/.minikube}
	I0729 04:32:50.359190   18743 buildroot.go:174] setting up certificates
	I0729 04:32:50.359195   18743 provision.go:84] configureAuth start
	I0729 04:32:50.359203   18743 provision.go:143] copyHostCerts
	I0729 04:32:50.359280   18743 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem, removing ...
	I0729 04:32:50.359286   18743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem
	I0729 04:32:50.359386   18743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem (1078 bytes)
	I0729 04:32:50.359566   18743 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem, removing ...
	I0729 04:32:50.359569   18743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem
	I0729 04:32:50.359629   18743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem (1123 bytes)
	I0729 04:32:50.360267   18743 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem, removing ...
	I0729 04:32:50.360270   18743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem
	I0729 04:32:50.360324   18743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem (1675 bytes)
	I0729 04:32:50.360414   18743 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-514000 san=[127.0.0.1 localhost minikube stopped-upgrade-514000]
	I0729 04:32:50.392972   18743 provision.go:177] copyRemoteCerts
	I0729 04:32:50.393019   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:32:50.393027   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:32:50.426194   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:32:50.432987   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:32:50.439415   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 04:32:50.446872   18743 provision.go:87] duration metric: took 87.674375ms to configureAuth
	I0729 04:32:50.446881   18743 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:32:50.446990   18743 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:32:50.447021   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.447100   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.447105   18743 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:32:50.512045   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:32:50.512057   18743 buildroot.go:70] root file system type: tmpfs
	I0729 04:32:50.512112   18743 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:32:50.512159   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.512285   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.512323   18743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:32:50.579703   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:32:50.579765   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.579873   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.579882   18743 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:32:50.958866   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 04:32:50.958881   18743 machine.go:97] duration metric: took 839.127375ms to provisionDockerMachine
	I0729 04:32:50.958887   18743 start.go:293] postStartSetup for "stopped-upgrade-514000" (driver="qemu2")
	I0729 04:32:50.958894   18743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:32:50.958947   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:32:50.958956   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:32:50.994215   18743 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:32:50.995348   18743 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:32:50.995355   18743 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19341-15486/.minikube/addons for local assets ...
	I0729 04:32:50.995444   18743 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19341-15486/.minikube/files for local assets ...
	I0729 04:32:50.995563   18743 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0729 04:32:50.995694   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:32:50.998509   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0729 04:32:51.005196   18743 start.go:296] duration metric: took 46.305166ms for postStartSetup
	I0729 04:32:51.005210   18743 fix.go:56] duration metric: took 21.096203833s for fixHost
	I0729 04:32:51.005243   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:51.005343   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:51.005347   18743 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 04:32:51.067678   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252771.301786004
	
	I0729 04:32:51.067686   18743 fix.go:216] guest clock: 1722252771.301786004
	I0729 04:32:51.067690   18743 fix.go:229] Guest: 2024-07-29 04:32:51.301786004 -0700 PDT Remote: 2024-07-29 04:32:51.005212 -0700 PDT m=+21.211089834 (delta=296.574004ms)
	I0729 04:32:51.067700   18743 fix.go:200] guest clock delta is within tolerance: 296.574004ms
	I0729 04:32:51.067703   18743 start.go:83] releasing machines lock for "stopped-upgrade-514000", held for 21.158709542s
	I0729 04:32:51.067758   18743 ssh_runner.go:195] Run: cat /version.json
	I0729 04:32:51.067772   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:32:51.067761   18743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:32:51.067810   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	W0729 04:32:51.068307   18743 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53329: connect: connection refused
	I0729 04:32:51.068328   18743 retry.go:31] will retry after 140.777815ms: dial tcp [::1]:53329: connect: connection refused
	W0729 04:32:51.248954   18743 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:32:51.249069   18743 ssh_runner.go:195] Run: systemctl --version
	I0729 04:32:51.252094   18743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:32:51.254728   18743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:32:51.254770   18743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:32:51.259274   18743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:32:51.265824   18743 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 04:32:51.265834   18743 start.go:495] detecting cgroup driver to use...
	I0729 04:32:51.265917   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:32:51.275539   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:32:51.279230   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:32:51.282531   18743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:32:51.282554   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:32:51.285655   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:32:51.288430   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:32:51.291414   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:32:51.294801   18743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:32:51.298346   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:32:51.301505   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:32:51.304280   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:32:51.307436   18743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:32:51.310567   18743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:32:51.313322   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:51.393888   18743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 04:32:51.399764   18743 start.go:495] detecting cgroup driver to use...
	I0729 04:32:51.399840   18743 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:32:51.406122   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:32:51.411052   18743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:32:51.418820   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:32:51.423560   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:32:51.428094   18743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 04:32:51.472715   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:32:51.477855   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:32:51.483205   18743 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:32:51.484506   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:32:51.487093   18743 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:32:51.491900   18743 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:32:51.569352   18743 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:32:51.647789   18743 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:32:51.647853   18743 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:32:51.653728   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:51.737429   18743 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:32:52.892511   18743 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15509225s)
	I0729 04:32:52.892573   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:32:52.897336   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:32:52.901372   18743 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:32:52.989197   18743 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:32:53.073178   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:53.148512   18743 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:32:53.154233   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:32:53.159408   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:53.243581   18743 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:32:53.283181   18743 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:32:53.283254   18743 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
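The "Will wait 60s for socket path" step amounts to polling stat on /var/run/cri-dockerd.sock until it appears or the deadline passes (here it appeared on the first try). A sketch of such a wait loop, with the 60s budget from the log and an assumed 500ms poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the deadline expires,
// mirroring the 60s wait for /var/run/cri-dockerd.sock above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}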
	I0729 04:32:53.286550   18743 start.go:563] Will wait 60s for crictl version
	I0729 04:32:53.286603   18743 ssh_runner.go:195] Run: which crictl
	I0729 04:32:53.288134   18743 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:32:53.304548   18743 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:32:53.304624   18743 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:32:53.321233   18743 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:32:53.342753   18743 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:32:53.342877   18743 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:32:53.344354   18743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:32:53.347876   18743 kubeadm.go:883] updating cluster {Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:32:53.347921   18743 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:32:53.347958   18743 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:32:53.358180   18743 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:32:53.358190   18743 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:32:53.358241   18743 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:32:53.361367   18743 ssh_runner.go:195] Run: which lz4
	I0729 04:32:53.362651   18743 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:32:53.363937   18743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:32:53.363948   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:32:54.273472   18743 docker.go:649] duration metric: took 910.872417ms to copy over tarball
	I0729 04:32:54.273554   18743 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:32:54.506808   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:32:55.430392   18743 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156851917s)
	I0729 04:32:55.430411   18743 ssh_runner.go:146] rm: /preloaded.tar.lz4
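The preload step above copies a ~360 MB tarball into the guest and unpacks it with tar decompressing through lz4 (-I lz4) while preserving the security.capability extended attribute that file capabilities on the bundled binaries rely on. A sketch that runs the same extraction and reports the duration metric the log records:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same extraction as the log: decompress through lz4, keep xattrs
	// (security.capability carries file capabilities on the binaries).
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	// the log reports this elapsed time as a duration metric
	fmt.Printf("extracted in %s\n", time.Since(start))
}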
	I0729 04:32:55.446475   18743 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:32:55.449916   18743 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:32:55.455184   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:55.537388   18743 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:32:57.145505   18743 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.608139209s)
	I0729 04:32:57.145607   18743 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:32:57.159068   18743 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:32:57.159077   18743 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:32:57.159082   18743 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:32:57.163502   18743 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.165356   18743 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.167408   18743 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.167595   18743 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.169460   18743 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.169554   18743 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.171059   18743 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.171079   18743 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.172146   18743 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:32:57.172212   18743 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.173506   18743 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.173509   18743 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.174332   18743 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.174859   18743 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:32:57.176054   18743 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.176633   18743 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
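Each "daemon lookup ... No such image" warning above is the expected first miss of a two-step lookup: try the local Docker daemon, then fall back to pulling the manifest from the registry (with the arch-mismatch warnings further down triggering a re-resolve for arm64). A sketch of that fallback using go-containerregistry; that minikube's image package behaves exactly this way is my assumption from the log, not a statement of its implementation:

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// lookup tries the local daemon first and falls back to the registry,
// which is why the log prints "daemon lookup ... No such image" before
// each image is ultimately retrieved another way.
func lookup(image string) (v1.Image, error) {
	ref, err := name.ParseReference(image)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil
	}
	return remote.Image(ref)
}

func main() {
	if _, err := lookup("registry.k8s.io/pause:3.7"); err != nil {
		fmt.Println(err)
	}
}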
	I0729 04:32:57.580018   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.592423   18743 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:32:57.592447   18743 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.592502   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.592994   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.602117   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.609119   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.609898   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:32:57.616707   18743 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:32:57.616729   18743 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.616783   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.618797   18743 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:32:57.618809   18743 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.618836   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.620879   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:32:57.624162   18743 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:32:57.624179   18743 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.624218   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.635764   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:32:57.644967   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:32:57.645002   18743 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:32:57.645016   18743 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:32:57.645063   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:32:57.647434   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:32:57.655511   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0729 04:32:57.655609   18743 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:32:57.655627   18743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:32:57.655706   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.666141   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.666147   18743 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:32:57.666159   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:32:57.666173   18743 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:32:57.666187   18743 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.666215   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.673375   18743 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:32:57.673387   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:32:57.689996   18743 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:32:57.690027   18743 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.690086   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.690095   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:32:57.690188   18743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:32:57.717634   18743 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
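The load itself is the "sudo cat <tarball> | docker load" pipeline seen above. An equivalent without the shell pipeline is to hand the tarball to docker load on stdin; note the log sudos the cat because the staged file may be root-owned, which this sketch glosses over:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImage streams a cached image tarball into docker load, equivalent
// to the "sudo cat ... | docker load" pipeline in the log.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("sudo", "docker", "load")
	cmd.Stdin = f // no shell pipeline needed
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}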
	I0729 04:32:57.722125   18743 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:32:57.722165   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:32:57.722213   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:32:57.760079   18743 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:32:57.760102   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:32:57.777948   18743 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:32:57.778070   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.805486   18743 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:32:57.805528   18743 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:32:57.805547   18743 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.805602   18743 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.821270   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:32:57.821376   18743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:32:57.822676   18743 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:32:57.822688   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:32:57.852315   18743 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:32:57.852329   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:32:58.095287   18743 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 04:32:58.095328   18743 cache_images.go:92] duration metric: took 936.263042ms to LoadCachedImages
	W0729 04:32:58.095365   18743 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 04:32:58.095371   18743 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:32:58.095422   18743 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 04:32:58.095487   18743 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:32:58.109231   18743 cni.go:84] Creating CNI manager for ""
	I0729 04:32:58.109243   18743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:32:58.109247   18743 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:32:58.109258   18743 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-514000 NodeName:stopped-upgrade-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:32:58.109321   18743 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-514000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 04:32:58.109373   18743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:32:58.112177   18743 binaries.go:44] Found k8s binaries, skipping transfer
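The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:181. A toy illustration of that rendering with text/template; the template text here is my own fragment, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A small fragment of the kubeadm config above, rendered from a struct
// the way the full file is generated from the logged options.
const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
	}{"10.0.2.15", 8443}
	template.Must(template.New("kubeadm").Parse(frag)).Execute(os.Stdout, opts)
}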
	I0729 04:32:58.112211   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:32:58.115005   18743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:32:58.120105   18743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:32:58.124898   18743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0729 04:32:58.129939   18743 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:32:58.131191   18743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:32:58.135194   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:58.212601   18743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:32:58.217817   18743 certs.go:68] Setting up /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000 for IP: 10.0.2.15
	I0729 04:32:58.217824   18743 certs.go:194] generating shared ca certs ...
	I0729 04:32:58.217832   18743 certs.go:226] acquiring lock for ca certs: {Name:mkdf1894d8f9d5e3cc3aa4d0030f6ecce44e63f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.217990   18743 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.key
	I0729 04:32:58.218040   18743 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.key
	I0729 04:32:58.218049   18743 certs.go:256] generating profile certs ...
	I0729 04:32:58.218126   18743 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.key
	I0729 04:32:58.218144   18743 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e
	I0729 04:32:58.218152   18743 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:32:58.263911   18743 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e ...
	I0729 04:32:58.263935   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e: {Name:mk4226757e478e05e8081a6bd878cc84b87db3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.264324   18743 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e ...
	I0729 04:32:58.264333   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e: {Name:mk9a6a66f7f3c7a6e0dd1d2799911a4a1764b4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.264474   18743 certs.go:381] copying /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e -> /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt
	I0729 04:32:58.264625   18743 certs.go:385] copying /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e -> /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key
	I0729 04:32:58.264790   18743 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/proxy-client.key
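The "generating signed profile cert" step above issues an apiserver serving certificate whose IP SANs are exactly the logged set [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]. A sketch with crypto/x509; it self-signs for brevity, whereas minikube signs with its minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// IP SANs matching the "Generating cert ... with IP's" line
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// self-signed: template doubles as parent
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte apiserver cert\n", len(der))
}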
	I0729 04:32:58.264926   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973.pem (1338 bytes)
	W0729 04:32:58.264956   18743 certs.go:480] ignoring /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0729 04:32:58.264961   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 04:32:58.264980   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:32:58.264997   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:32:58.265015   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem (1675 bytes)
	I0729 04:32:58.265053   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0729 04:32:58.265417   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:32:58.272374   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 04:32:58.279190   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:32:58.286504   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 04:32:58.294473   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:32:58.301244   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 04:32:58.308225   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:32:58.315265   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 04:32:58.322603   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:32:58.329618   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0729 04:32:58.336238   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0729 04:32:58.342953   18743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:32:58.348188   18743 ssh_runner.go:195] Run: openssl version
	I0729 04:32:58.349904   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:32:58.352696   18743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:32:58.354022   18743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:32:58.354040   18743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:32:58.355809   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 04:32:58.359057   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0729 04:32:58.362443   18743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0729 04:32:58.363929   18743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:17 /usr/share/ca-certificates/15973.pem
	I0729 04:32:58.363949   18743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0729 04:32:58.365845   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0729 04:32:58.368609   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0729 04:32:58.371773   18743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0729 04:32:58.373227   18743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:17 /usr/share/ca-certificates/159732.pem
	I0729 04:32:58.373250   18743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0729 04:32:58.374931   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
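The openssl/ln dance above is OpenSSL's hashed-directory convention: "openssl x509 -hash" prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs makes the cert discoverable by that hash (b5213941.0 is minikubeCA's). A sketch of the same two steps (it needs root to write /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash reproduces the openssl-hash + ln -fs pattern above:
// OpenSSL resolves trusted certs in /etc/ssl/certs via <subject-hash>.0 links.
func linkBySubjectHash(cert string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(cert, link)
}

func main() {
	if err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}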
	I0729 04:32:58.378255   18743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:32:58.379817   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:32:58.382052   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:32:58.383894   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:32:58.385822   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:32:58.387608   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:32:58.389378   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
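The run of "-checkend 86400" probes asks, for each cert, whether it expires within the next 24 hours. The same check in pure Go, without shelling out to openssl:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -checkend 86400`:
// does the certificate expire within the given duration?
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}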
	I0729 04:32:58.391082   18743 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:32:58.391151   18743 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:32:58.401210   18743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:32:58.404516   18743 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:32:58.404522   18743 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:32:58.404543   18743 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:32:58.407319   18743 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:32:58.407633   18743 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-514000" does not appear in /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:32:58.407732   18743 kubeconfig.go:62] /Users/jenkins/minikube-integration/19341-15486/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-514000" cluster setting kubeconfig missing "stopped-upgrade-514000" context setting]
	I0729 04:32:58.407927   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.408550   18743 kapi.go:59] client config for stopped-upgrade-514000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060b8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:32:58.408882   18743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:32:58.411517   18743 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-514000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0729 04:32:58.411523   18743 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:32:58.411565   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:32:58.423405   18743 docker.go:483] Stopping containers: [fb1260acc22b d3755a4fce21 c0c4385482f6 f6ecb8618d59 36af8e90410c 565a0b2bf32c 43bffe5a5082 dfd3430538d4]
	I0729 04:32:58.423467   18743 ssh_runner.go:195] Run: docker stop fb1260acc22b d3755a4fce21 c0c4385482f6 f6ecb8618d59 36af8e90410c 565a0b2bf32c 43bffe5a5082 dfd3430538d4
	I0729 04:32:58.434163   18743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 04:32:58.439506   18743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:32:58.442769   18743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:32:58.442780   18743 kubeadm.go:157] found existing configuration files:
	
	I0729 04:32:58.442804   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf
	I0729 04:32:58.445726   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:32:58.445755   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:32:58.448312   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf
	I0729 04:32:58.450980   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:32:58.451006   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:32:58.453988   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf
	I0729 04:32:58.456444   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:32:58.456462   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:32:58.459116   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf
	I0729 04:32:58.462079   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:32:58.462102   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
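The grep/rm loop above implements stale-config cleanup: a kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. Condensed into a sketch (rm -f semantics, endpoint taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStale mirrors the grep/rm loop above: keep a kubeconfig only if it
// already contains the expected control-plane endpoint, otherwise remove it.
func cleanStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // missing or stale: drop it, ignoring errors like rm -f
			fmt.Println("removed", f)
		}
	}
}

func main() {
	cleanStale("https://control-plane.minikube.internal:53363", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}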
	I0729 04:32:58.464643   18743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:32:58.467340   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.489698   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.843284   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.968689   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.995658   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:59.015900   18743 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:32:59.015983   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:32:59.518027   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:32:59.508918   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:32:59.509077   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:32:59.530440   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:32:59.530509   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:32:59.546395   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:32:59.546469   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:32:59.558708   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:32:59.558784   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:32:59.571439   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:32:59.571517   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:32:59.582658   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:32:59.582738   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:32:59.594593   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:32:59.594667   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:32:59.613052   18178 logs.go:276] 0 containers: []
	W0729 04:32:59.613066   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:32:59.613135   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:32:59.629493   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:32:59.629513   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:32:59.629520   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:32:59.643307   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:32:59.643320   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:32:59.657335   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:32:59.657348   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:32:59.673779   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:32:59.673791   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:32:59.709382   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:32:59.709395   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:32:59.724663   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:32:59.724674   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:32:59.740207   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:32:59.740220   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:32:59.759274   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:32:59.759287   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:32:59.772949   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:32:59.772962   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:32:59.785441   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:32:59.785456   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:32:59.799659   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:32:59.799672   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:32:59.840137   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:32:59.840153   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:32:59.845140   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:32:59.845151   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:32:59.861989   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:32:59.862012   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:32:59.886405   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:32:59.886429   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:32:59.902461   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:32:59.902477   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:32:59.933101   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:32:59.933130   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:33:00.018044   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:33:00.022473   18743 api_server.go:72] duration metric: took 1.006598167s to wait for apiserver process to appear ...
	I0729 04:33:00.022482   18743 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:33:00.022493   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
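The healthz wait on both sides of this interleaved log is a short-timeout GET against /healthz, retried until the apiserver answers 200; the "context deadline exceeded" lines below are individual attempts timing out. A sketch of one probe (the real client authenticates with the profile certs; this one skips TLS verification purely for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs one "Checking apiserver healthz" probe: a GET with a
// short client timeout, treated as healthy only on HTTP 200.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // roughly the gap between attempts in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // this is the "context deadline exceeded" case in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}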
	I0729 04:33:02.450164   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:05.023889   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:05.023955   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:07.452271   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:07.452444   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:33:07.469648   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:33:07.469743   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:33:07.482861   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:33:07.482930   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:33:07.494142   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:33:07.494213   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:33:07.509186   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:33:07.509255   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:33:07.519840   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:33:07.519905   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:33:07.530753   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:33:07.530817   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:33:07.542832   18178 logs.go:276] 0 containers: []
	W0729 04:33:07.542846   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:33:07.542903   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:33:07.553508   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:33:07.553526   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:33:07.553531   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:33:07.567774   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:33:07.567786   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:33:07.584979   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:33:07.584990   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:33:07.596765   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:33:07.596779   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:33:07.634075   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:33:07.634088   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:33:07.638443   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:33:07.638451   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:33:07.663982   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:33:07.663998   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:33:07.675690   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:33:07.675701   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:33:07.712947   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:33:07.712959   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:33:07.728088   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:33:07.728099   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:33:07.744104   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:33:07.744115   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:33:07.757308   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:33:07.757320   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:33:07.775433   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:33:07.775445   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:33:07.786784   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:33:07.786797   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:33:07.801711   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:33:07.801721   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:33:07.817012   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:33:07.817024   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:33:07.828590   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:33:07.828602   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:33:10.355057   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:10.024356   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:10.024409   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:15.356799   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:15.356989   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:33:15.376557   18178 logs.go:276] 2 containers: [6c08ba5d3da1 da7fecfce787]
	I0729 04:33:15.376657   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:33:15.390906   18178 logs.go:276] 2 containers: [67adfb5f130b b25546feb08e]
	I0729 04:33:15.390989   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:33:15.402225   18178 logs.go:276] 1 containers: [7d8d587b96b1]
	I0729 04:33:15.402302   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:33:15.417240   18178 logs.go:276] 2 containers: [fb4b7f38a84f 8d522a953404]
	I0729 04:33:15.417311   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:33:15.427904   18178 logs.go:276] 1 containers: [e94bef30402e]
	I0729 04:33:15.427987   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:33:15.438081   18178 logs.go:276] 2 containers: [cc35d6605130 627551587c9d]
	I0729 04:33:15.438150   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:33:15.448037   18178 logs.go:276] 0 containers: []
	W0729 04:33:15.448047   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:33:15.448108   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:33:15.461446   18178 logs.go:276] 2 containers: [0d3f8cead05b a7aef54446de]
	I0729 04:33:15.461465   18178 logs.go:123] Gathering logs for kube-apiserver [6c08ba5d3da1] ...
	I0729 04:33:15.461471   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c08ba5d3da1"
	I0729 04:33:15.475154   18178 logs.go:123] Gathering logs for kube-apiserver [da7fecfce787] ...
	I0729 04:33:15.475168   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da7fecfce787"
	I0729 04:33:15.500481   18178 logs.go:123] Gathering logs for kube-controller-manager [cc35d6605130] ...
	I0729 04:33:15.500494   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc35d6605130"
	I0729 04:33:15.527382   18178 logs.go:123] Gathering logs for storage-provisioner [a7aef54446de] ...
	I0729 04:33:15.527395   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7aef54446de"
	I0729 04:33:15.542285   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:33:15.542298   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:33:15.554141   18178 logs.go:123] Gathering logs for etcd [67adfb5f130b] ...
	I0729 04:33:15.554152   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 67adfb5f130b"
	I0729 04:33:15.568497   18178 logs.go:123] Gathering logs for kube-scheduler [fb4b7f38a84f] ...
	I0729 04:33:15.568511   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb4b7f38a84f"
	I0729 04:33:15.585724   18178 logs.go:123] Gathering logs for kube-scheduler [8d522a953404] ...
	I0729 04:33:15.585738   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d522a953404"
	I0729 04:33:15.601530   18178 logs.go:123] Gathering logs for kube-proxy [e94bef30402e] ...
	I0729 04:33:15.601543   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94bef30402e"
	I0729 04:33:15.613209   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:33:15.613220   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:33:15.649763   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:33:15.649772   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:33:15.685885   18178 logs.go:123] Gathering logs for etcd [b25546feb08e] ...
	I0729 04:33:15.685896   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b25546feb08e"
	I0729 04:33:15.699485   18178 logs.go:123] Gathering logs for coredns [7d8d587b96b1] ...
	I0729 04:33:15.699495   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d8d587b96b1"
	I0729 04:33:15.711393   18178 logs.go:123] Gathering logs for storage-provisioner [0d3f8cead05b] ...
	I0729 04:33:15.711406   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d3f8cead05b"
	I0729 04:33:15.723375   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:33:15.723386   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:33:15.728309   18178 logs.go:123] Gathering logs for kube-controller-manager [627551587c9d] ...
	I0729 04:33:15.728317   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 627551587c9d"
	I0729 04:33:15.742862   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:33:15.742872   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:33:15.024719   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:15.024766   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:18.269405   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:23.271492   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:23.271529   18178 kubeadm.go:597] duration metric: took 4m4.227687458s to restartPrimaryControlPlane
	W0729 04:33:23.271563   18178 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:33:23.271591   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:33:24.217554   18178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:33:24.222603   18178 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:33:24.225390   18178 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:33:24.227981   18178 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:33:24.227987   18178 kubeadm.go:157] found existing configuration files:
	
	I0729 04:33:24.228008   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/admin.conf
	I0729 04:33:24.230562   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:33:24.230587   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:33:24.233078   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/kubelet.conf
	I0729 04:33:24.235781   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:33:24.235802   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:33:24.238775   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/controller-manager.conf
	I0729 04:33:24.241264   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:33:24.241284   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:33:24.243989   18178 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/scheduler.conf
	I0729 04:33:24.247030   18178 kubeadm.go:163] "https://control-plane.minikube.internal:53139" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53139 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:33:24.247053   18178 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:33:24.249719   18178 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:33:24.267140   18178 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:33:24.267171   18178 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:33:24.314720   18178 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:33:24.314780   18178 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:33:24.314848   18178 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:33:24.363457   18178 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:33:20.025140   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:20.025183   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:24.367546   18178 out.go:204]   - Generating certificates and keys ...
	I0729 04:33:24.367593   18178 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:33:24.367625   18178 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:33:24.367669   18178 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:33:24.367703   18178 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:33:24.367743   18178 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:33:24.367773   18178 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:33:24.367806   18178 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:33:24.367839   18178 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:33:24.367876   18178 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:33:24.367909   18178 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:33:24.367927   18178 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:33:24.367955   18178 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:33:24.492064   18178 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:33:24.681792   18178 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:33:24.751412   18178 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:33:24.786224   18178 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:33:24.816612   18178 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:33:24.816967   18178 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:33:24.817089   18178 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:33:24.895716   18178 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:33:24.898664   18178 out.go:204]   - Booting up control plane ...
	I0729 04:33:24.898747   18178 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:33:24.898792   18178 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:33:24.899248   18178 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:33:24.899294   18178 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:33:24.899375   18178 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:33:25.025577   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:25.025598   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:29.402778   18178 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.504306 seconds
	I0729 04:33:29.402844   18178 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:33:29.407090   18178 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:33:29.924240   18178 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:33:29.924574   18178 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-317000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:33:30.427806   18178 kubeadm.go:310] [bootstrap-token] Using token: smrxp0.0qq2oz84ss0v9vcx
	I0729 04:33:30.434051   18178 out.go:204]   - Configuring RBAC rules ...
	I0729 04:33:30.434106   18178 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:33:30.434148   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:33:30.439690   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:33:30.440562   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:33:30.446045   18178 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:33:30.450156   18178 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:33:30.453430   18178 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:33:30.630542   18178 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:33:30.833280   18178 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:33:30.833668   18178 kubeadm.go:310] 
	I0729 04:33:30.833699   18178 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:33:30.833720   18178 kubeadm.go:310] 
	I0729 04:33:30.833786   18178 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:33:30.833816   18178 kubeadm.go:310] 
	I0729 04:33:30.833849   18178 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:33:30.833879   18178 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:33:30.833903   18178 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:33:30.833917   18178 kubeadm.go:310] 
	I0729 04:33:30.834004   18178 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:33:30.834039   18178 kubeadm.go:310] 
	I0729 04:33:30.834067   18178 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:33:30.834075   18178 kubeadm.go:310] 
	I0729 04:33:30.834124   18178 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:33:30.834185   18178 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:33:30.834313   18178 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:33:30.834319   18178 kubeadm.go:310] 
	I0729 04:33:30.834359   18178 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:33:30.834401   18178 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:33:30.834408   18178 kubeadm.go:310] 
	I0729 04:33:30.834499   18178 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token smrxp0.0qq2oz84ss0v9vcx \
	I0729 04:33:30.834586   18178 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 \
	I0729 04:33:30.834620   18178 kubeadm.go:310] 	--control-plane 
	I0729 04:33:30.834624   18178 kubeadm.go:310] 
	I0729 04:33:30.834664   18178 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:33:30.834679   18178 kubeadm.go:310] 
	I0729 04:33:30.834797   18178 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token smrxp0.0qq2oz84ss0v9vcx \
	I0729 04:33:30.834872   18178 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 
	I0729 04:33:30.834979   18178 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 04:33:30.835076   18178 cni.go:84] Creating CNI manager for ""
	I0729 04:33:30.835107   18178 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:33:30.841647   18178 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:33:30.849663   18178 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:33:30.852530   18178 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 04:33:30.857387   18178 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:33:30.857433   18178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:33:30.857511   18178 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-317000 minikube.k8s.io/updated_at=2024_07_29T04_33_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=running-upgrade-317000 minikube.k8s.io/primary=true
	I0729 04:33:30.906674   18178 kubeadm.go:1113] duration metric: took 49.282833ms to wait for elevateKubeSystemPrivileges
	I0729 04:33:30.906688   18178 ops.go:34] apiserver oom_adj: -16
	I0729 04:33:30.906693   18178 kubeadm.go:394] duration metric: took 4m11.876413375s to StartCluster
	I0729 04:33:30.906703   18178 settings.go:142] acquiring lock: {Name:mk7d7deaddc5161eee59fbf4fca49f66001c194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:33:30.906870   18178 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:33:30.907278   18178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:33:30.907499   18178 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:33:30.907510   18178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:33:30.907550   18178 config.go:182] Loaded profile config "running-upgrade-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:33:30.907553   18178 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-317000"
	I0729 04:33:30.907565   18178 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-317000"
	W0729 04:33:30.907568   18178 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:33:30.907580   18178 host.go:66] Checking if "running-upgrade-317000" exists ...
	I0729 04:33:30.907580   18178 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-317000"
	I0729 04:33:30.907591   18178 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-317000"
	I0729 04:33:30.908411   18178 kapi.go:59] client config for running-upgrade-317000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/running-upgrade-317000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101ccc080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:33:30.908532   18178 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-317000"
	W0729 04:33:30.908537   18178 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:33:30.908542   18178 host.go:66] Checking if "running-upgrade-317000" exists ...
	I0729 04:33:30.911585   18178 out.go:177] * Verifying Kubernetes components...
	I0729 04:33:30.911941   18178 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:33:30.917952   18178 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:33:30.917959   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:33:30.921527   18178 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:33:30.924560   18178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:33:30.928597   18178 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:33:30.928604   18178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:33:30.928609   18178 sshutil.go:53] new ssh client: &{IP:localhost Port:53107 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/running-upgrade-317000/id_rsa Username:docker}
	I0729 04:33:31.018382   18178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:33:31.023832   18178 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:33:31.023876   18178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:33:31.028646   18178 api_server.go:72] duration metric: took 121.139584ms to wait for apiserver process to appear ...
	I0729 04:33:31.028654   18178 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:33:31.028660   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:31.042006   18178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:33:31.065692   18178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:33:30.026119   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:30.026167   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:36.030723   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:36.030807   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:35.027220   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:35.027283   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:41.031335   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:41.031377   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:40.028543   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:40.028616   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:46.031781   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:46.031848   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:45.030286   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:45.030331   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:51.032374   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:51.032436   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:50.032239   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:50.032337   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:56.033281   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:56.033338   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:55.034668   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:55.034717   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:01.034297   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:01.034355   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:34:01.379261   18178 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:34:01.383569   18178 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:34:01.390487   18178 addons.go:510] duration metric: took 30.483747083s for enable addons: enabled=[storage-provisioner]
	I0729 04:34:00.037157   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:00.037552   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:00.062783   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:00.062899   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:00.077789   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:00.077886   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:00.091367   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:00.091461   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:00.103758   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:00.103834   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:00.115045   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:00.115115   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:00.125629   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:00.125701   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:00.139101   18743 logs.go:276] 0 containers: []
	W0729 04:34:00.139114   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:00.139170   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:00.150167   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:00.150184   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:00.150190   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:00.164332   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:00.164344   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:00.194627   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:00.194639   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:00.206405   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:00.206418   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:00.245445   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:00.245456   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:00.257920   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:00.257931   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:00.271070   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:00.271088   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:00.282308   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:00.282320   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:00.298839   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:00.298850   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:00.310922   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:00.310933   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:00.335402   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:00.335412   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:00.361749   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:00.361763   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:00.373559   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:00.373573   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:00.385274   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:00.385287   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:00.389664   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:00.389670   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:00.494010   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:00.494022   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:00.510377   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:00.510387   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:03.028251   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:06.035663   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:06.035721   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:08.029115   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:08.029315   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:08.053135   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:08.053226   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:08.074255   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:08.074330   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:08.084850   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:08.084911   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:08.095498   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:08.095574   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:08.106157   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:08.106226   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:08.116518   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:08.116592   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:08.128561   18743 logs.go:276] 0 containers: []
	W0729 04:34:08.128577   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:08.128633   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:08.139160   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:08.139177   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:08.139183   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:08.143578   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:08.143584   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:08.167733   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:08.167746   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:08.182643   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:08.182654   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:08.203523   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:08.203542   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:08.221574   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:08.221585   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:08.235273   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:08.235284   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:08.251366   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:08.251377   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:08.288613   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:08.288623   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:08.324613   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:08.324626   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:08.338354   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:08.338368   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:08.350207   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:08.350220   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:08.364986   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:08.364996   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:08.377038   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:08.377049   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:08.388968   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:08.388983   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:08.401004   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:08.401018   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:08.412965   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:08.412977   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:11.037530   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:11.037580   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:10.940372   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:16.039895   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:16.039914   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:15.942548   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:15.942739   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:15.958358   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:15.958441   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:15.970883   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:15.970966   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:15.981849   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:15.981928   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:15.992132   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:15.992204   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:16.002550   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:16.002615   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:16.013166   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:16.013237   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:16.023486   18743 logs.go:276] 0 containers: []
	W0729 04:34:16.023497   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:16.023557   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:16.033938   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:16.033954   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:16.033959   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:16.059152   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:16.059162   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:16.070185   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:16.070196   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:16.082188   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:16.082200   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:16.094887   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:16.094899   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:16.109283   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:16.109293   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:16.121766   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:16.121779   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:16.133396   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:16.133407   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:16.149011   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:16.149022   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:16.183360   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:16.183371   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:16.188072   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:16.188079   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:16.208332   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:16.208342   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:16.227856   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:16.227866   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:16.239624   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:16.239638   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:16.257962   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:16.257972   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:16.269288   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:16.269300   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:16.293404   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:16.293414   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:18.833164   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:21.041220   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:21.041273   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:23.835247   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:23.835409   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:23.847534   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:23.847607   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:23.857966   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:23.858027   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:23.868277   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:23.868339   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:23.878783   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:23.878846   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:23.889263   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:23.889325   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:23.899901   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:23.899963   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:23.911772   18743 logs.go:276] 0 containers: []
	W0729 04:34:23.911784   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:23.911839   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:23.922627   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:23.922641   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:23.922646   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:23.936746   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:23.936759   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:23.955888   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:23.955899   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:23.966600   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:23.966609   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:23.990371   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:23.990378   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:24.002046   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:24.002054   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:24.038554   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:24.038565   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:24.064182   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:24.064194   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:24.078525   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:24.078538   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:24.093388   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:24.093400   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:24.105230   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:24.105245   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:24.122775   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:24.122787   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:24.127180   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:24.127186   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:24.139000   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:24.139014   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:24.176935   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:24.176950   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:24.188162   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:24.188175   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:24.200779   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:24.200792   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:26.043456   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:26.043502   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:26.714955   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:31.044847   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
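	The interleaved entries above come from two concurrent minikube processes (PIDs 18743 and 18178), each stuck in the same retry loop: api_server.go:253 issues one GET against https://10.0.2.15:8443/healthz, and api_server.go:269 reports it failing with a client-side timeout roughly five seconds later. Below is a minimal sketch of that probe loop; the five-second timeout, the skipped TLS verification, and the loop shape are inferred from the log timing, not taken from minikube's source.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Matches the ~5s gap between each "Checking" and "stopped"
			// pair in the log; the exact value is an assumption.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The guest apiserver presents a self-signed certificate,
				// so a bare probe like this skips verification (assumption).
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz"
		for attempt := 0; attempt < 3; attempt++ {
			fmt.Printf("Checking apiserver healthz at %s ...\n", url)
			resp, err := client.Get(url)
			if err != nil {
				// Yields the "Client.Timeout exceeded while awaiting
				// headers" error seen above when the apiserver never answers.
				fmt.Printf("stopped: %s: %v\n", url, err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
	}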
	I0729 04:34:31.044968   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:31.056808   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:31.056875   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:31.067908   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:31.067978   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:31.078489   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:31.078564   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:31.089824   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:31.089891   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:31.100269   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:31.100335   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:31.111448   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:31.111516   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:31.121331   18178 logs.go:276] 0 containers: []
	W0729 04:34:31.121345   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:31.121397   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:31.131761   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
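	Each diagnostic pass opens with eight docker ps queries, one per expected component, keyed on the kubelet's k8s_<component> container-name prefix; logs.go:276 records the matching IDs and logs.go:278 warns when a component has no container at all (here kindnet, which is not running in this configuration). A sketch of that enumeration step, assuming local docker access; minikube actually runs these commands through ssh_runner.go inside the guest VM.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the IDs of all containers, running or exited,
	// whose name matches the kubelet's k8s_<component> naming convention.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println(err)
				continue
			}
			// Mirrors the "N containers: [...]" and
			// "No container was found matching ..." lines above.
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}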
	I0729 04:34:31.131775   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:31.131780   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:31.146834   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:31.146849   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:31.158656   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:31.158667   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:31.176445   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:31.176456   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:31.201503   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:31.201514   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:31.213137   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:31.213151   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:31.217425   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:31.217434   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:31.252886   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:31.252898   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:31.267282   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:31.267293   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:31.281209   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:31.281220   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:31.292509   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:31.292522   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:31.303854   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:31.303868   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:31.315125   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:31.315135   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:31.717288   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:31.717475   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:31.738580   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:31.738676   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:31.761845   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:31.761924   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:31.778441   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:31.778505   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:31.789031   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:31.789096   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:31.802908   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:31.802968   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:31.819385   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:31.819456   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:31.829965   18743 logs.go:276] 0 containers: []
	W0729 04:34:31.829980   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:31.830032   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:31.840948   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:31.840969   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:31.840974   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:31.855683   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:31.855698   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:31.868991   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:31.869005   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:31.882235   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:31.882245   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:31.893791   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:31.893802   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:31.917015   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:31.917023   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:31.928561   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:31.928571   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:31.952141   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:31.952150   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:31.986280   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:31.986291   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:32.010502   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:32.010514   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:32.014690   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:32.014698   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:32.028333   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:32.028344   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:32.041887   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:32.041899   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:32.057085   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:32.057098   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:32.095384   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:32.095395   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:32.107317   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:32.107333   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:32.124973   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:32.124984   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
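	Once the container IDs are known, every "Gathering logs for <source> ..." entry at logs.go:123 is paired with one shell command executed via /bin/bash -c: docker logs --tail 400 <id> for container sources, journalctl or a filtered dmesg for host sources, and the version-pinned kubectl binary for "describe nodes". The command strings below are copied verbatim from the log; the wrapper function is an illustrative stand-in for minikube's logs.go, not its actual code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log-collection command the way the log shows them
	// being run: through /bin/bash -c, capturing stdout and stderr together.
	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// Container sources reuse the IDs found by the docker ps step.
		gather("kube-apiserver [bd4857b46b80]", "docker logs --tail 400 bd4857b46b80")
		// Host sources read the systemd journal and the kernel ring buffer.
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		// Cluster-level view via the pinned kubectl inside the guest.
		gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	}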
	I0729 04:34:34.638476   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:33.854512   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:39.640245   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:39.640438   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:39.665843   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:39.665965   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:39.684577   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:39.684657   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:39.697770   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:39.697848   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:39.713360   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:39.713432   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:39.723530   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:39.723602   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:39.738050   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:39.738122   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:39.747990   18743 logs.go:276] 0 containers: []
	W0729 04:34:39.748002   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:39.748056   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:39.758669   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:39.758692   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:39.758697   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:39.771538   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:39.771550   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:39.783242   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:39.783255   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:39.795596   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:39.795608   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:38.856670   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:38.856772   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:38.872139   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:38.872216   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:38.888228   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:38.888293   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:38.898584   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:38.898650   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:38.908794   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:38.908870   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:38.919686   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:38.919747   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:38.930130   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:38.930192   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:38.940251   18178 logs.go:276] 0 containers: []
	W0729 04:34:38.940262   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:38.940315   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:38.950554   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:38.950572   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:38.950578   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:38.961592   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:38.961604   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:38.973074   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:38.973085   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:38.984827   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:38.984836   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:38.995970   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:38.995983   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:39.033674   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:39.033685   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:39.037976   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:39.037985   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:39.052210   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:39.052221   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:39.068654   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:39.068668   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:39.092727   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:39.092738   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:39.128763   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:39.128775   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:39.145461   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:39.145473   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:39.163910   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:39.163920   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:41.677585   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:39.820145   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:39.820159   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:39.857458   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:39.857471   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:39.872895   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:39.872906   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:39.885005   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:39.885017   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:39.916183   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:39.916194   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:39.930331   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:39.930346   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:39.942755   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:39.942774   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:39.962251   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:39.962261   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:39.966237   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:39.966244   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:40.007359   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:40.007371   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:40.021721   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:40.021732   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:40.040494   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:40.040505   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:40.052013   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:40.052024   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:42.566270   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:46.680110   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:46.680499   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:46.718792   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:46.718930   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:46.744572   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:46.744669   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:46.758765   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:46.758847   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:46.770378   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:46.770447   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:46.792850   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:46.792919   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:46.805794   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:46.805897   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:46.816452   18178 logs.go:276] 0 containers: []
	W0729 04:34:46.816463   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:46.816525   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:46.826926   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:46.826938   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:46.826944   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:46.872651   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:46.872668   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:46.877675   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:46.877686   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:46.913617   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:46.913630   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:46.927743   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:46.927755   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:46.941542   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:46.941554   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:46.953236   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:46.953248   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:46.967714   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:46.967727   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:46.979252   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:46.979264   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:46.991286   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:46.991297   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:47.003408   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:47.003420   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:47.024540   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:47.024553   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:47.036697   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:47.036707   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:47.568766   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:47.568973   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:47.591222   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:47.591335   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:47.605969   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:47.606046   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:47.618497   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:47.618564   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:47.629197   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:47.629263   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:47.640509   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:47.640572   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:47.651312   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:47.651380   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:47.662461   18743 logs.go:276] 0 containers: []
	W0729 04:34:47.662472   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:47.662532   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:47.673198   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:47.673216   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:47.673222   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:47.687053   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:47.687064   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:47.699281   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:47.699293   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:47.716799   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:47.716813   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:47.729580   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:47.729592   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:47.767268   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:47.767276   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:47.771999   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:47.772009   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:47.809278   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:47.809290   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:47.821186   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:47.821196   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:47.836313   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:47.836321   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:47.850499   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:47.850509   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:47.864027   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:47.864043   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:47.894647   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:47.894667   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:47.919640   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:47.919651   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:47.933860   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:47.933872   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:47.945140   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:47.945152   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:47.956729   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:47.956741   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:49.563770   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:50.473848   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:54.566006   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:54.566209   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:54.594577   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:34:54.594698   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:54.613829   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:34:54.613908   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:54.627895   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:34:54.627970   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:54.639597   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:34:54.639658   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:54.656775   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:34:54.656841   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:54.667447   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:34:54.667510   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:54.677686   18178 logs.go:276] 0 containers: []
	W0729 04:34:54.677698   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:54.677750   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:54.687931   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:34:54.687947   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:54.687952   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:54.725634   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:54.725646   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:54.762159   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:34:54.762173   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:34:54.773708   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:34:54.773719   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:34:54.785695   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:34:54.785706   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:34:54.800785   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:34:54.800796   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:34:54.818892   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:34:54.818906   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:34:54.830394   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:34:54.830409   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:54.842262   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:54.842273   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:54.847181   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:34:54.847190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:34:54.867383   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:34:54.867393   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:34:54.885414   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:34:54.885428   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:34:54.897216   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:54.897230   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:55.475954   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:55.476110   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:55.490996   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:55.491069   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:55.504682   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:55.504799   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:55.515853   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:55.515922   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:55.526510   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:55.526570   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:55.537037   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:55.537099   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:55.551232   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:55.551306   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:55.561440   18743 logs.go:276] 0 containers: []
	W0729 04:34:55.561452   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:55.561504   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:55.571621   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:55.571640   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:55.571647   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:55.589533   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:55.589544   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:55.605152   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:55.605163   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:55.616897   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:55.616911   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:55.630307   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:55.630322   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:55.651935   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:55.651947   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:55.677796   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:55.677808   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:55.715175   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:55.715185   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:55.751620   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:55.751632   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:55.778220   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:55.778233   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:55.789907   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:55.789919   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:55.802290   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:55.802301   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:55.808555   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:55.808564   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:55.823150   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:55.823164   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:55.834373   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:55.834385   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:55.847303   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:55.847315   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:55.863034   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:55.863044   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:58.382205   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:57.422592   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:03.384442   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:03.384615   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:03.405366   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:03.405448   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:03.418811   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:03.418876   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:03.429636   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:03.429705   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:03.439900   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:03.439964   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:03.450821   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:03.450887   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:03.461273   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:03.461339   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:03.471327   18743 logs.go:276] 0 containers: []
	W0729 04:35:03.471340   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:03.471396   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:03.485968   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:03.485985   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:03.485991   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:03.497423   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:03.497435   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:03.522080   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:03.522091   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:03.559689   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:03.559700   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:03.573293   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:03.573303   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:03.595290   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:03.595303   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:03.610127   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:03.610139   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:03.621544   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:03.621554   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:03.638981   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:03.638995   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:03.651299   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:03.651310   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:03.663091   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:03.663106   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:03.667656   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:03.667663   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:03.692149   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:03.692160   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:03.703675   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:03.703689   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:03.718645   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:03.718655   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:03.732083   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:03.732095   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:03.769025   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:03.769036   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:02.424725   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:02.424925   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:02.446424   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:02.446529   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:02.462598   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:02.462673   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:02.476196   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:02.476265   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:02.486586   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:02.486650   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:02.500489   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:02.500556   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:02.510808   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:02.510872   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:02.531030   18178 logs.go:276] 0 containers: []
	W0729 04:35:02.531043   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:02.531101   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:02.542031   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:02.542049   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:02.542055   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:02.556065   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:02.556079   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:02.567080   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:02.567091   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:02.582531   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:02.582545   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:02.596802   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:02.596812   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:02.614447   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:02.614457   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:02.626052   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:02.626061   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:02.650881   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:02.650888   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:02.686064   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:02.686077   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:02.691172   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:02.691181   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:02.708229   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:02.708239   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:02.720904   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:02.720920   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:02.732676   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:02.732692   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
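	The "container status" source gathered just above is the one command in the cycle with built-in fallbacks. The shell line

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	prefers crictl when it is on the PATH (the backquoted substitution degrades to the literal word crictl, which then fails fast if the binary is absent) and falls back to docker ps -a when the crictl invocation fails for any reason. A Go rendering of the same preference order, using exec.LookPath in place of which; this is an approximate sketch, not minikube's implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prefer crictl when installed, as `which crictl || echo crictl` does.
		cmd := "sudo crictl ps -a"
		if _, err := exec.LookPath("crictl"); err != nil {
			cmd = "sudo docker ps -a" // no crictl on PATH: use docker directly
		}
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			// The shell version would also retry with docker when a present
			// crictl fails at runtime; that branch is omitted here for brevity.
			fmt.Printf("container status failed: %v\n", err)
		}
		fmt.Print(string(out))
	}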
	I0729 04:35:05.270206   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:06.285934   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:10.271383   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:10.271584   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:10.291667   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:10.291760   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:10.313686   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:10.313757   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:10.324835   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:10.324896   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:10.335022   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:10.335085   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:10.345952   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:10.346018   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:10.356218   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:10.356290   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:10.366452   18178 logs.go:276] 0 containers: []
	W0729 04:35:10.366465   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:10.366532   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:10.376717   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:10.376733   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:10.376738   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:10.387972   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:10.387981   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:10.424003   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:10.424012   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:10.428151   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:10.428160   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:10.442020   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:10.442031   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:10.455675   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:10.455686   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:10.466872   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:10.466886   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:10.481465   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:10.481476   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:10.498691   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:10.498700   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:10.509968   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:10.509979   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:10.544575   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:10.544586   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:10.556092   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:10.556103   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:10.567368   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:10.567380   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:11.288207   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:11.288363   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:11.309570   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:11.309663   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:11.324820   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:11.324901   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:11.337474   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:11.337547   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:11.348571   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:11.348651   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:11.359348   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:11.359414   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:11.370120   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:11.370189   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:11.380430   18743 logs.go:276] 0 containers: []
	W0729 04:35:11.380442   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:11.380501   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:11.391032   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:11.391051   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:11.391057   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:11.404785   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:11.404795   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:11.416195   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:11.416207   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:11.427113   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:11.427122   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:11.439026   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:11.439037   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:11.450810   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:11.450822   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:11.472493   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:11.472504   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:11.485108   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:11.485119   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:11.520887   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:11.520898   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:11.556578   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:11.556589   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:11.580933   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:11.580943   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:11.595253   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:11.595263   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:11.609325   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:11.609336   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:11.621441   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:11.621453   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:11.625604   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:11.625612   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:11.639576   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:11.639585   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:11.655311   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:11.655322   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:14.181845   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:13.092872   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
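	(Editor's note: the `api_server.go:253`/`api_server.go:269` pairs bracketing each gathering pass are a single healthz probe: an HTTPS GET against the node's apiserver with a hard client-side timeout, roughly 5s judging by the timestamps, whose expiry produces exactly the `context deadline exceeded (Client.Timeout exceeded while awaiting headers)` wording seen above. A sketch of one such probe; the URL is taken from the log, the timeout is inferred, and skipping TLS verification is a simplification:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one apiserver probe with a hard client-side timeout.
// On expiry, client.Get returns the same wording seen in the log:
// `context deadline exceeded (Client.Timeout exceeded while awaiting headers)`.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // inferred from the ~5s probe spacing above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil // a healthy apiserver answers 200 "ok"
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```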
	I0729 04:35:19.184006   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:19.184114   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:19.195365   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:19.195440   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:19.206257   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:19.206353   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:19.217014   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:19.217084   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:19.227309   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:19.227373   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:19.237967   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:19.238037   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:19.248446   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:19.248514   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:19.258551   18743 logs.go:276] 0 containers: []
	W0729 04:35:19.258562   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:19.258621   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:19.279895   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:19.279913   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:19.279918   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:19.318263   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:19.318281   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:19.346575   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:19.346604   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:19.364045   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:19.364059   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:19.404037   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:19.404048   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:19.418682   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:19.418696   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:19.443950   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:19.443960   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:19.461697   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:19.461711   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:19.473430   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:19.473442   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:19.488393   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:19.488405   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:19.502426   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:19.502443   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:19.514006   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:19.514018   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:19.532721   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:19.532734   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:19.544679   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:19.544690   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:19.548535   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:19.548541   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:19.560390   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:19.560400   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:19.572567   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:19.572578   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:18.095164   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:18.095345   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:18.120489   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:18.120572   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:18.133864   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:18.133936   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:18.145351   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:18.145412   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:18.155623   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:18.155687   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:18.168717   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:18.168781   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:18.180029   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:18.180085   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:18.190038   18178 logs.go:276] 0 containers: []
	W0729 04:35:18.190051   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:18.190102   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:18.207762   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:18.207777   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:18.207783   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:18.212181   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:18.212190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:18.223307   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:18.223318   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:18.235680   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:18.235693   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:18.248341   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:18.248355   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:18.285971   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:18.285984   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:18.300424   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:18.300438   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:18.314824   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:18.314839   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:18.330523   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:18.330537   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:18.347072   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:18.347086   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:18.364553   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:18.364565   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:18.376417   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:18.376427   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:18.399369   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:18.399377   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
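	(Editor's note: each "Gathering logs for <component>" pair above wraps one remote command: `docker logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for kubelet and Docker, and `kubectl describe nodes` against the node-local kubeconfig for cluster state. A sketch of the per-container case, assuming local shell access; minikube routes the same command string through its ssh_runner:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs tails the last 400 lines of one container, matching
// the `/bin/bash -c "docker logs --tail 400 <id>"` invocations above.
// Sketch assuming local shell access.
func gatherContainerLogs(containerID string) (string, error) {
	cmd := fmt.Sprintf("docker logs --tail 400 %s", containerID)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// ID of the kube-apiserver container from the 18178 cycle above.
	out, err := gatherContainerLogs("bd9f32999555")
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(out)
}
```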
	I0729 04:35:20.934766   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:22.085903   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:25.937033   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:25.937272   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:25.955592   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:25.955676   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:25.972807   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:25.972882   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:25.987358   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:25.987423   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:25.998022   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:25.998097   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:26.008538   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:26.008610   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:26.018986   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:26.019052   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:26.029113   18178 logs.go:276] 0 containers: []
	W0729 04:35:26.029125   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:26.029180   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:26.039510   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:26.039525   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:26.039530   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:26.050520   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:26.050533   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:26.062321   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:26.062332   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:26.079799   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:26.079812   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:26.091428   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:26.091440   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:26.129394   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:26.129413   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:26.134569   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:26.134576   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:26.171775   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:26.171787   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:26.186153   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:26.186164   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:26.202236   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:26.202247   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:26.213727   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:26.213739   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:26.228380   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:26.228390   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:26.253030   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:26.253037   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:27.088035   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:27.088217   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:27.106012   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:27.106108   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:27.121519   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:27.121596   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:27.139607   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:27.139678   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:27.149694   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:27.149768   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:27.160427   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:27.160493   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:27.170800   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:27.170867   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:27.180766   18743 logs.go:276] 0 containers: []
	W0729 04:35:27.180780   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:27.180837   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:27.191509   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:27.191526   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:27.191532   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:27.195795   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:27.195803   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:27.207740   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:27.207753   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:27.226653   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:27.226664   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:27.238968   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:27.238980   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:27.250268   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:27.250279   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:27.262029   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:27.262040   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:27.279830   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:27.279847   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:27.305824   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:27.305835   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:27.324586   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:27.324597   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:27.336102   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:27.336114   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:27.347966   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:27.347976   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:27.371061   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:27.371073   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:27.390331   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:27.390344   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:27.401345   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:27.401356   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:27.440573   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:27.440585   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:27.476591   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:27.476604   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:28.766882   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:29.991971   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:33.769106   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:33.769370   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:33.793371   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:33.793471   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:33.809710   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:33.809787   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:33.822417   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:33.822492   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:33.833136   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:33.833203   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:33.844122   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:33.844194   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:33.854866   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:33.854936   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:33.865094   18178 logs.go:276] 0 containers: []
	W0729 04:35:33.865105   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:33.865160   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:33.875294   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:33.875308   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:33.875314   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:33.887569   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:33.887583   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:33.905303   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:33.905314   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:33.921315   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:33.921324   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:33.932948   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:33.932959   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:33.966842   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:33.966853   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:33.980847   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:33.980861   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:33.992679   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:33.992695   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:34.006961   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:34.006970   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:34.019538   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:34.019549   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:34.044235   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:34.044244   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:34.080977   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:34.080987   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:34.085468   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:34.085475   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
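	(Editor's note: the "container status" step just above uses a double fallback rather than assuming a runtime: if `which crictl` resolves, the full path is used; otherwise the bare name is tried anyway; and if the crictl invocation fails either way, `docker ps -a` is the last resort. A sketch reproducing that one-liner, assuming passwordless sudo on the guest VM as the log implies:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback one-liner from the log:
//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
// crictl is preferred when present; docker is the fallback runtime query.
func containerStatus() (string, error) {
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("status failed:", err)
	}
	fmt.Print(out)
}
```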
	I0729 04:35:36.600006   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:34.994353   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:34.994674   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:35.027102   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:35.027238   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:35.046424   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:35.046523   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:35.060660   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:35.060736   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:35.074107   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:35.074187   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:35.085005   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:35.085079   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:35.097116   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:35.097185   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:35.108414   18743 logs.go:276] 0 containers: []
	W0729 04:35:35.108427   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:35.108488   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:35.119684   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:35.119702   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:35.119706   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:35.131265   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:35.131276   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:35.145408   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:35.145418   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:35.168726   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:35.168736   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:35.183390   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:35.183401   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:35.200290   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:35.200302   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:35.217243   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:35.217255   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:35.241478   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:35.241492   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:35.253111   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:35.253123   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:35.257620   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:35.257626   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:35.294165   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:35.294180   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:35.309146   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:35.309156   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:35.336618   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:35.336630   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:35.351359   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:35.351370   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:35.366509   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:35.366520   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:35.378570   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:35.378583   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:35.390276   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:35.390286   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:37.929192   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:41.602291   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:41.602513   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:41.625993   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:41.626109   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:41.641982   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:41.642056   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:41.654770   18178 logs.go:276] 2 containers: [87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:41.654838   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:41.665817   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:41.665874   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:41.675951   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:41.676022   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:41.686308   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:41.686373   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:41.696417   18178 logs.go:276] 0 containers: []
	W0729 04:35:41.696430   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:41.696494   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:41.707037   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:41.707055   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:41.707061   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:41.742559   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:41.742566   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:41.746483   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:41.746492   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:41.780563   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:41.780574   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:41.799179   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:41.799190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:41.813814   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:41.813827   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:41.825263   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:41.825274   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:41.837144   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:41.837158   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:41.851509   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:41.851523   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:41.863583   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:41.863596   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:41.881166   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:41.881178   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:41.893983   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:41.893992   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:41.916880   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:41.916890   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:42.931650   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:42.931832   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:42.944554   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:42.944629   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:42.955799   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:42.955872   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:42.966313   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:42.966381   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:42.977054   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:42.977132   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:42.992515   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:42.992586   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:43.003656   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:43.003728   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:43.013332   18743 logs.go:276] 0 containers: []
	W0729 04:35:43.013352   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:43.013410   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:43.023671   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:43.023697   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:43.023703   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:43.028008   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:43.028017   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:43.052328   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:43.052340   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:43.066065   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:43.066076   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:43.077813   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:43.077827   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:43.095896   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:43.095908   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:43.110198   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:43.110211   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:43.122553   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:43.122565   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:43.162444   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:43.162463   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:43.198715   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:43.198727   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:43.209618   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:43.209627   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:43.222358   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:43.222370   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:43.236711   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:43.236721   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:43.252540   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:43.252550   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:43.268231   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:43.268241   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:43.280530   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:43.280542   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:43.292611   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:43.292623   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
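	(Editor's note: stepping back, the two interleaved PIDs, 18178 and 18743, are separate test clusters each running the same wait loop: probe /healthz, run a full gathering pass when the probe times out, pause briefly, about 2.5s by the timestamps, and retry until an overall deadline. A compact, self-contained sketch of that control flow, with trimmed stand-ins for the probe and gather steps shown earlier:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe and gatherAllLogs are trimmed stand-ins for the earlier sketches so
// this file compiles on its own.
func probe(url string) error {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get(url)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func gatherAllLogs() { fmt.Println("gathering kubelet/dmesg/component logs ...") }

// waitForAPIServer mirrors the control flow visible in the log: probe
// /healthz, run a gathering pass on every timeout, pause briefly, retry
// until an overall deadline expires. The 2.5s pause matches the gap between
// the end of each pass and the next "Checking apiserver healthz" line.
func waitForAPIServer(url string, deadline time.Duration) error {
	for start := time.Now(); time.Since(start) < deadline; {
		if err := probe(url); err == nil {
			return nil
		}
		gatherAllLogs()
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForAPIServer("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```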
	I0729 04:35:44.430172   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:45.819327   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:49.431595   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:49.431697   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:49.442908   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:49.442978   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:49.455924   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:49.455999   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:49.471666   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:49.471735   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:49.485975   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:49.486046   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:49.496994   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:49.497060   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:49.507761   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:49.507822   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:49.518455   18178 logs.go:276] 0 containers: []
	W0729 04:35:49.518465   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:49.518518   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:49.529079   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:49.529095   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:49.529101   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:49.546953   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:49.546963   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:49.582565   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:35:49.582576   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:35:49.594185   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:49.594196   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:49.605486   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:49.605498   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:49.616963   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:49.616974   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:49.640248   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:49.640256   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:49.651897   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:49.651908   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:49.656885   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:35:49.656893   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:35:49.668855   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:49.668867   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:49.680943   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:49.680955   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:49.693270   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:49.693279   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:49.730478   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:49.730486   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:35:49.746453   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:49.746462   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:49.760637   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:49.760652   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
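	(Editor's note: one substantive change appears in the cycle above: from 04:35:49 the coredns filter matches four containers instead of two, with new IDs 62d0a42eab2e and 53a1b1e2c0c0 alongside the earlier pair, suggesting the coredns pods were recreated while the apiserver stayed unreachable; the gatherer simply tails every matched ID. A sketch composing discovery and tailing for that case, with a hypothetical helper name and local docker access assumed:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponent discovers every container matching k8s_<component> and
// tails 400 lines from each, as the gatherer does for the four coredns IDs
// above. Sketch only; helper name is hypothetical.
func tailComponent(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("tail %s: %w", id, err)
		}
		fmt.Printf("== %s ==\n%s", id, logs)
	}
	return nil
}

func main() {
	if err := tailComponent("coredns"); err != nil {
		fmt.Println(err)
	}
}
```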
	I0729 04:35:52.277298   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:50.821531   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:50.821782   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:50.843484   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:50.843588   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:50.863170   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:50.863242   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:50.874767   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:50.874843   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:50.886269   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:50.886344   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:50.896988   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:50.897058   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:50.908173   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:50.908242   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:50.919661   18743 logs.go:276] 0 containers: []
	W0729 04:35:50.919675   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:50.919738   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:50.930462   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:50.930480   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:50.930485   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:50.969839   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:50.969848   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:50.982552   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:50.982565   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:51.001233   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:51.001244   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:51.012857   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:51.012868   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:51.035950   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:51.035957   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:51.049766   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:51.049776   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:51.074206   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:51.074217   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:51.089170   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:51.089181   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:51.100787   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:51.100799   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:51.112636   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:51.112648   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:51.124859   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:51.124872   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:51.166239   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:51.166254   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:51.180764   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:51.180777   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:51.197744   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:51.197756   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:51.210347   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:51.210360   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:51.214623   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:51.214630   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:53.728189   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:57.279633   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:57.279827   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:57.296342   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:35:57.296418   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:57.310979   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:35:57.311040   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:57.322885   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:35:57.322953   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:57.333550   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:35:57.333612   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:57.343977   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:35:57.344036   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:57.354792   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:35:57.354856   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:58.730709   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:58.730909   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:58.747541   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:58.747628   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:58.760974   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:58.761049   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:58.772174   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:58.772246   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:58.783507   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:58.783577   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:58.793901   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:58.793969   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:58.808744   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:58.808819   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:58.819188   18743 logs.go:276] 0 containers: []
	W0729 04:35:58.819201   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:58.819261   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:58.829247   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:58.829267   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:58.829273   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:58.841955   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:58.841967   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:58.860076   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:58.860090   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:58.872589   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:58.872601   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:58.884537   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:58.884547   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:58.900846   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:58.900861   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:58.912072   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:58.912084   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:58.923877   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:58.923893   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:58.961284   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:58.961292   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:58.997444   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:58.997460   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:59.011892   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:59.011907   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:59.034272   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:59.034283   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:59.056898   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:59.056905   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:59.061061   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:59.061067   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:59.086397   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:59.086409   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:59.101102   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:59.101114   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:59.113210   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:59.113221   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:57.365516   18178 logs.go:276] 0 containers: []
	W0729 04:35:57.365527   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:57.365577   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:57.376751   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:35:57.376766   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:35:57.376772   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:35:57.397173   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:57.397185   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:57.421721   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:35:57.421727   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:35:57.433208   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:35:57.433223   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:35:57.445679   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:57.445693   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:57.484332   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:35:57.484346   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:35:57.496538   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:35:57.496552   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:57.508494   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:57.508508   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:57.547887   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:35:57.547898   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:35:57.568369   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:35:57.568381   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:35:57.582322   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:35:57.582334   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:35:57.596828   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:35:57.596840   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:35:57.608551   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:35:57.608563   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:35:57.620110   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:57.620120   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:57.625094   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:35:57.625101   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:00.140607   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:01.630161   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:05.142799   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:05.143031   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:05.161816   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:05.161908   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:05.175784   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:05.175844   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:05.187434   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:05.187506   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:05.201532   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:05.201597   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:05.212143   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:05.212215   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:05.226629   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:05.226720   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:05.236648   18178 logs.go:276] 0 containers: []
	W0729 04:36:05.236659   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:05.236716   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:05.247559   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:05.247575   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:05.247580   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:05.263252   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:05.263262   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:05.278655   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:05.278666   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:05.290292   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:05.290304   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:05.294941   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:05.294950   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:05.330279   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:05.330290   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:05.344932   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:05.344943   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:05.358181   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:05.358191   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:05.393468   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:05.393476   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:05.404871   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:05.404883   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:05.421854   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:05.421864   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:05.440452   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:05.440464   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:05.454878   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:05.454890   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:05.465753   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:05.465763   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:05.480342   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:05.480355   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:06.632534   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:06.632778   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:06.659178   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:06.659290   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:06.677328   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:06.677405   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:06.690351   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:06.690425   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:06.702085   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:06.702161   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:06.713035   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:06.713103   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:06.726106   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:06.726178   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:06.741133   18743 logs.go:276] 0 containers: []
	W0729 04:36:06.741144   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:06.741200   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:06.758142   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:06.758159   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:06.758164   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:06.783079   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:06.783092   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:06.796135   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:06.796145   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:06.808183   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:06.808197   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:06.843825   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:06.843841   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:06.860558   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:06.860575   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:06.872969   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:06.872980   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:06.885272   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:06.885286   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:06.923291   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:06.923304   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:06.938806   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:06.938816   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:06.962580   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:06.962592   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:06.975344   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:06.975358   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:07.000459   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:07.000469   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:07.014860   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:07.014872   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:07.029061   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:07.029073   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:07.040559   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:07.040572   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:07.052218   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:07.052230   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:09.558766   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:08.005859   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:14.560944   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:14.561079   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:14.576515   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:14.576595   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:14.594077   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:14.594146   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:14.604587   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:14.604676   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:14.615970   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:14.616047   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:14.626846   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:14.626918   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:14.638122   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:14.638190   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:14.648410   18743 logs.go:276] 0 containers: []
	W0729 04:36:14.648420   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:14.648473   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:14.659168   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:14.659185   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:14.659191   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:14.670393   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:14.670406   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:14.685722   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:14.685732   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:14.697286   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:14.697298   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:14.711435   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:14.711446   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:14.726914   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:14.726926   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:14.739523   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:14.739537   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:14.750994   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:14.751005   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:14.789459   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:14.789471   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:13.008189   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:13.008615   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:13.049246   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:13.049418   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:13.071420   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:13.071541   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:13.087455   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:13.087535   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:13.100057   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:13.100125   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:13.111422   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:13.111497   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:13.126253   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:13.126322   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:13.137295   18178 logs.go:276] 0 containers: []
	W0729 04:36:13.137306   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:13.137365   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:13.149348   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:13.149368   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:13.149373   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:13.161189   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:13.161203   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:13.173354   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:13.173367   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:13.184891   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:13.184906   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:13.211854   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:13.211865   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:13.223381   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:13.223392   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:13.261371   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:13.261382   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:13.265466   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:13.265474   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:13.279133   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:13.279143   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:13.295191   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:13.295201   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:13.309981   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:13.310038   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:13.322572   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:13.322584   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:13.341373   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:13.341388   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:13.379525   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:13.379537   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:13.394927   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:13.394942   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:15.908168   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:14.819602   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:14.819615   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:14.838287   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:14.838302   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:14.862081   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:14.862089   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:14.875018   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:14.875030   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:14.879747   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:14.879754   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:14.915777   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:14.915789   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:14.930135   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:14.930146   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:14.942114   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:14.942128   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:17.465334   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:20.910544   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:20.910778   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:20.943312   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:20.943405   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:20.959260   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:20.959334   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:20.970999   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:20.971068   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:20.981629   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:20.981694   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:20.992281   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:20.992356   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:21.002625   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:21.002682   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:21.012544   18178 logs.go:276] 0 containers: []
	W0729 04:36:21.012555   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:21.012608   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:21.028658   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:21.028675   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:21.028680   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:21.053252   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:21.053259   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:21.089210   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:21.089222   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:21.100399   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:21.100410   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:21.112255   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:21.112266   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:21.127562   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:21.127573   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:21.139433   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:21.139443   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:21.151582   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:21.151594   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:21.189067   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:21.189076   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:21.203177   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:21.203190   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:21.217184   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:21.217195   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:21.229200   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:21.229212   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:21.246576   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:21.246587   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:21.258710   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:21.258724   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:21.270765   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:21.270776   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:22.467681   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:22.467815   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:22.509347   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:22.509438   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:22.528501   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:22.528574   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:22.550602   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:22.550674   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:22.561900   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:22.561966   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:22.572649   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:22.572720   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:22.582798   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:22.582860   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:22.593317   18743 logs.go:276] 0 containers: []
	W0729 04:36:22.593329   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:22.593382   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:22.603629   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:22.603645   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:22.603650   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:22.608357   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:22.608364   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:22.622692   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:22.622703   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:22.637551   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:22.637562   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:22.651353   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:22.651365   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:22.663169   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:22.663181   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:22.677403   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:22.677414   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:22.702852   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:22.702864   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:22.714800   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:22.714812   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:22.731941   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:22.731953   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:22.744327   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:22.744339   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:22.757832   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:22.757842   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:22.768965   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:22.768976   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:22.792954   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:22.792963   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:22.829562   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:22.829572   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:22.863804   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:22.863815   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:22.882743   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:22.882755   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:23.777690   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:25.396620   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:28.779913   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:28.780192   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:28.805055   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:28.805178   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:28.821078   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:28.821159   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:28.833913   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:28.833993   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:28.844963   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:28.845030   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:28.855222   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:28.855288   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:28.865618   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:28.865687   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:28.875775   18178 logs.go:276] 0 containers: []
	W0729 04:36:28.875789   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:28.875841   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:28.886303   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:28.886321   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:28.886327   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:28.897815   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:28.897826   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:28.909521   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:28.909535   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:28.920976   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:28.920986   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:28.946452   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:28.946465   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:28.958484   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:28.958497   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:28.970169   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:28.970181   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:28.981787   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:28.981799   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:28.993424   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:28.993435   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:28.998339   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:28.998346   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:29.012874   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:29.012889   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:29.048645   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:29.048654   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:29.082633   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:29.082647   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:29.097305   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:29.097319   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:29.122859   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:29.122873   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:31.643623   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:30.397151   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:30.397339   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:30.412773   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:30.412857   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:30.425293   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:30.425353   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:30.436136   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:30.436205   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:30.446863   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:30.446935   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:30.460596   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:30.460670   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:30.478529   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:30.478599   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:30.488389   18743 logs.go:276] 0 containers: []
	W0729 04:36:30.488404   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:30.488459   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:30.498646   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:30.498663   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:30.498669   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:30.536895   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:30.536911   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:30.553510   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:30.553522   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:30.569107   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:30.569118   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:30.580907   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:30.580918   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:30.598397   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:30.598410   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:30.610031   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:30.610041   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:30.622384   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:30.622395   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:30.659658   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:30.659671   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:30.684993   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:30.685007   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:30.706822   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:30.706834   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:30.730239   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:30.730253   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:30.735004   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:30.735011   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:30.749479   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:30.749490   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:30.761433   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:30.761444   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:30.776613   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:30.776625   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:30.792372   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:30.792386   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:33.308742   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:36.645935   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:36.646078   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:36.662152   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:36.662237   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:36.675368   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:36.675443   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:36.686083   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:36.686156   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:36.704001   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:36.704062   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:36.716180   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:36.716257   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:36.726953   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:36.727032   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:36.737155   18178 logs.go:276] 0 containers: []
	W0729 04:36:36.737169   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:36.737223   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:36.747977   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:36.747994   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:36.747999   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:36.768212   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:36.768221   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:36.779476   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:36.779486   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:36.795816   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:36.795826   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:36.810381   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:36.810394   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:36.828186   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:36.828197   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:36.863613   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:36.863624   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:36.877481   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:36.877491   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:36.889370   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:36.889384   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:36.900943   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:36.900952   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:36.905869   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:36.905877   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:36.917627   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:36.917637   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:36.931127   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:36.931138   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:36.943060   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:36.943070   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:36.966823   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:36.966834   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:38.310983   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:38.311166   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:38.327324   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:38.327410   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:38.339941   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:38.340009   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:38.350813   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:38.350889   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:38.369001   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:38.369066   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:38.379382   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:38.379442   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:38.393529   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:38.393599   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:38.403847   18743 logs.go:276] 0 containers: []
	W0729 04:36:38.403860   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:38.403916   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:38.415255   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:38.415272   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:38.415278   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:38.451013   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:38.451029   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:38.464789   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:38.464800   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:38.478552   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:38.478563   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:38.493430   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:38.493444   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:38.505940   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:38.505950   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:38.510312   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:38.510318   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:38.523617   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:38.523629   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:38.534967   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:38.534979   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:38.571620   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:38.571636   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:38.585530   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:38.585543   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:38.609404   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:38.609420   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:38.636807   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:38.636818   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:38.651585   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:38.651600   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:38.663709   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:38.663721   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:38.675911   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:38.675922   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:38.699536   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:38.699547   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:39.504195   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:41.214451   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:44.506318   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:44.506535   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:44.523635   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:44.523715   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:44.537148   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:44.537222   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:44.548747   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:44.548814   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:44.563355   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:44.563420   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:44.573924   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:44.573987   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:44.584582   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:44.584657   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:44.594807   18178 logs.go:276] 0 containers: []
	W0729 04:36:44.594818   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:44.594872   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:44.605420   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:44.605439   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:44.605446   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:44.618757   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:44.618768   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:44.643252   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:44.643259   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:44.680167   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:44.680176   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:44.715441   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:44.715456   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:44.729428   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:44.729439   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:44.734074   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:44.734082   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:44.747393   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:44.747406   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:44.759273   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:44.759287   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:44.773355   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:44.773366   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:44.784878   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:44.784889   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:44.803881   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:44.803893   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:44.815855   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:44.815866   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:44.827776   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:44.827790   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:44.842923   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:44.842932   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:46.216701   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:46.216929   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:46.237656   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:46.237751   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:46.252203   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:46.252274   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:46.264158   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:46.264227   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:46.275286   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:46.275408   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:46.286251   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:46.286313   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:46.297164   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:46.297223   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:46.307551   18743 logs.go:276] 0 containers: []
	W0729 04:36:46.307561   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:46.307614   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:46.317821   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:46.317845   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:46.317850   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:46.334537   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:46.334552   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:46.352179   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:46.352194   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:46.367317   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:46.367328   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:46.391832   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:46.391847   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:46.405206   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:46.405221   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:46.416404   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:46.416416   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:46.428802   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:46.428812   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:46.441860   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:46.441874   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:46.464577   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:46.464585   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:46.468486   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:46.468492   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:46.481920   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:46.481930   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:46.496191   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:46.496205   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:46.507451   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:46.507462   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:46.519682   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:46.519696   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:46.557114   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:46.557127   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:46.579576   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:46.579590   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:49.119608   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:47.372358   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:54.121913   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:54.122277   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:54.157851   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:54.157999   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:54.176721   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:54.176820   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:54.196220   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:54.196295   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:54.220867   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:54.220944   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:54.232139   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:54.232204   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:54.245675   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:54.245741   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:54.255839   18743 logs.go:276] 0 containers: []
	W0729 04:36:54.255856   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:54.255917   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:54.266857   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:54.266876   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:54.266884   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:54.307759   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:54.307774   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:54.333785   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:54.333797   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:54.348749   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:54.348759   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:54.360563   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:54.360573   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:54.375131   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:54.375173   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:54.386723   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:54.386736   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:54.405543   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:54.405553   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:54.421688   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:54.421701   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:54.434560   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:54.434573   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:54.446315   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:54.446327   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:54.459843   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:54.459857   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:54.477936   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:54.477949   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:54.490214   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:54.490228   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:54.527087   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:54.527096   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:54.531731   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:54.531738   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:54.554698   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:54.554705   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:52.374524   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:52.374739   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:52.399775   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:36:52.399883   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:52.416063   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:36:52.416131   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:52.430870   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:36:52.430939   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:52.441918   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:36:52.441990   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:52.452806   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:36:52.452871   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:52.463730   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:36:52.463805   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:52.474444   18178 logs.go:276] 0 containers: []
	W0729 04:36:52.474456   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:52.474511   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:52.485177   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:36:52.485195   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:52.485200   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:52.489913   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:36:52.489920   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:36:52.505025   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:36:52.505034   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:36:52.516667   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:36:52.516680   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:52.528295   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:52.528304   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:52.566341   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:36:52.566351   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:36:52.583171   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:36:52.583183   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:36:52.594711   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:36:52.594725   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:36:52.606222   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:36:52.606231   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:36:52.620797   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:36:52.620809   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:36:52.638820   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:52.638834   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:52.662702   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:36:52.662720   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:36:52.680195   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:36:52.680206   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:36:52.694695   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:36:52.694707   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:36:52.706052   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:52.706067   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:55.242564   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:57.068829   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:00.244795   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:00.244966   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:00.266210   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:00.266301   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:00.281499   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:00.281580   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:00.294125   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:00.294195   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:00.306855   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:00.306924   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:00.317258   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:00.317326   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:00.327676   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:00.327743   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:00.342050   18178 logs.go:276] 0 containers: []
	W0729 04:37:00.342062   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:00.342124   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:00.352827   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:00.352846   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:00.352851   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:00.377382   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:00.377388   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:00.390769   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:00.390780   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:00.405424   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:00.405435   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:00.417568   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:00.417579   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:00.455492   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:00.455505   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:00.469713   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:00.469724   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:00.481440   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:00.481451   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:00.498876   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:00.498888   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:00.510812   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:00.510823   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:00.522796   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:00.522806   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:00.534333   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:00.534346   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:00.548309   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:00.548320   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:00.562850   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:00.562863   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:00.567392   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:00.567399   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:02.071161   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:02.071230   18743 kubeadm.go:597] duration metric: took 4m3.672683541s to restartPrimaryControlPlane
	W0729 04:37:02.071286   18743 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:37:02.071314   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:37:03.101472   18743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030169916s)
	I0729 04:37:03.101553   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:37:03.106417   18743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:37:03.109252   18743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:37:03.111784   18743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:37:03.111790   18743 kubeadm.go:157] found existing configuration files:
	
	I0729 04:37:03.111815   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf
	I0729 04:37:03.114821   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:37:03.114843   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:37:03.118325   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf
	I0729 04:37:03.121035   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:37:03.121060   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:37:03.123547   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf
	I0729 04:37:03.126130   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:37:03.126152   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:37:03.129006   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf
	I0729 04:37:03.131536   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:37:03.131561   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:37:03.134717   18743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:37:03.151909   18743 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:37:03.152010   18743 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:37:03.201599   18743 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:37:03.201691   18743 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:37:03.201764   18743 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 04:37:03.250769   18743 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:37:03.258943   18743 out.go:204]   - Generating certificates and keys ...
	I0729 04:37:03.258978   18743 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:37:03.259010   18743 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:37:03.259056   18743 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:37:03.259089   18743 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:37:03.259122   18743 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:37:03.259158   18743 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:37:03.259200   18743 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:37:03.259235   18743 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:37:03.259275   18743 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:37:03.259322   18743 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:37:03.259340   18743 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:37:03.259367   18743 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:37:03.497224   18743 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:37:03.630617   18743 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:37:03.683596   18743 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:37:03.720522   18743 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:37:03.751567   18743 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:37:03.752390   18743 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:37:03.752429   18743 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:37:03.839435   18743 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:37:03.842603   18743 out.go:204]   - Booting up control plane ...
	I0729 04:37:03.842660   18743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:37:03.842709   18743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:37:03.842788   18743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:37:03.842862   18743 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:37:03.842988   18743 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:37:03.105204   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:07.841111   18743 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001918 seconds
	I0729 04:37:07.841175   18743 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:37:07.844573   18743 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:37:08.353928   18743 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:37:08.354045   18743 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:37:08.859477   18743 kubeadm.go:310] [bootstrap-token] Using token: ttptur.zxqljb2zjeuj67nz
	I0729 04:37:08.865640   18743 out.go:204]   - Configuring RBAC rules ...
	I0729 04:37:08.865697   18743 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:37:08.865744   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:37:08.872648   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:37:08.873714   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:37:08.874898   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:37:08.875721   18743 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:37:08.879141   18743 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:37:09.059576   18743 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:37:09.263977   18743 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:37:09.264641   18743 kubeadm.go:310] 
	I0729 04:37:09.264672   18743 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:37:09.264675   18743 kubeadm.go:310] 
	I0729 04:37:09.264711   18743 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:37:09.264713   18743 kubeadm.go:310] 
	I0729 04:37:09.264729   18743 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:37:09.264808   18743 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:37:09.264861   18743 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:37:09.264867   18743 kubeadm.go:310] 
	I0729 04:37:09.264896   18743 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:37:09.264899   18743 kubeadm.go:310] 
	I0729 04:37:09.264933   18743 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:37:09.264936   18743 kubeadm.go:310] 
	I0729 04:37:09.264962   18743 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:37:09.265034   18743 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:37:09.265076   18743 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:37:09.265079   18743 kubeadm.go:310] 
	I0729 04:37:09.265135   18743 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:37:09.265193   18743 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:37:09.265204   18743 kubeadm.go:310] 
	I0729 04:37:09.265262   18743 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ttptur.zxqljb2zjeuj67nz \
	I0729 04:37:09.265316   18743 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 \
	I0729 04:37:09.265326   18743 kubeadm.go:310] 	--control-plane 
	I0729 04:37:09.265328   18743 kubeadm.go:310] 
	I0729 04:37:09.265367   18743 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:37:09.265369   18743 kubeadm.go:310] 
	I0729 04:37:09.265408   18743 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ttptur.zxqljb2zjeuj67nz \
	I0729 04:37:09.265462   18743 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 
	I0729 04:37:09.265516   18743 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 04:37:09.265524   18743 cni.go:84] Creating CNI manager for ""
	I0729 04:37:09.265532   18743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:37:09.269329   18743 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:37:09.277328   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:37:09.280436   18743 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 04:37:09.285190   18743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:37:09.285254   18743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:37:09.285310   18743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-514000 minikube.k8s.io/updated_at=2024_07_29T04_37_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=stopped-upgrade-514000 minikube.k8s.io/primary=true
	I0729 04:37:09.288658   18743 ops.go:34] apiserver oom_adj: -16
	I0729 04:37:09.348775   18743 kubeadm.go:1113] duration metric: took 63.562334ms to wait for elevateKubeSystemPrivileges
	I0729 04:37:09.348798   18743 kubeadm.go:394] duration metric: took 4m10.963877875s to StartCluster
	I0729 04:37:09.348807   18743 settings.go:142] acquiring lock: {Name:mk7d7deaddc5161eee59fbf4fca49f66001c194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:37:09.348888   18743 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:37:09.349311   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:37:09.349494   18743 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:37:09.349501   18743 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:37:09.349574   18743 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-514000"
	I0729 04:37:09.349577   18743 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:37:09.349587   18743 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-514000"
	W0729 04:37:09.349591   18743 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:37:09.349606   18743 host.go:66] Checking if "stopped-upgrade-514000" exists ...
	I0729 04:37:09.349614   18743 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-514000"
	I0729 04:37:09.349627   18743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-514000"
	I0729 04:37:09.353332   18743 out.go:177] * Verifying Kubernetes components...
	I0729 04:37:09.354123   18743 kapi.go:59] client config for stopped-upgrade-514000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060b8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:37:09.357693   18743 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-514000"
	W0729 04:37:09.357697   18743 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:37:09.357705   18743 host.go:66] Checking if "stopped-upgrade-514000" exists ...
	I0729 04:37:09.358215   18743 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:37:09.358220   18743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:37:09.358225   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:37:09.361295   18743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:37:09.365297   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:37:09.369329   18743 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:37:09.369337   18743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:37:09.369346   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:37:09.456086   18743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:37:09.461370   18743 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:37:09.461411   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:37:09.464970   18743 api_server.go:72] duration metric: took 115.468208ms to wait for apiserver process to appear ...
	I0729 04:37:09.464980   18743 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:37:09.464987   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:09.508148   18743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:37:09.521646   18743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:37:08.107239   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:08.107353   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:08.118255   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:08.118321   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:08.128846   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:08.128916   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:08.139991   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:08.140060   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:08.150656   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:08.150722   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:08.161226   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:08.161291   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:08.175193   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:08.175267   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:08.187005   18178 logs.go:276] 0 containers: []
	W0729 04:37:08.187020   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:08.187082   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:08.197401   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:08.197418   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:08.197423   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:08.211847   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:08.211861   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:08.223518   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:08.223532   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:08.241466   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:08.241476   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:08.276938   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:08.276952   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:08.312967   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:08.312981   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:08.330227   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:08.330237   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:08.355071   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:08.355082   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:08.371112   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:08.371124   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:08.383497   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:08.383508   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:08.388579   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:08.388587   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:08.404546   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:08.404559   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:08.416130   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:08.416141   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:08.428848   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:08.428860   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:08.440452   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:08.440466   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:10.953878   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:14.466397   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:14.466455   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:15.955679   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:15.955786   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:15.968673   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:15.968742   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:15.979242   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:15.979312   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:15.989516   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:15.989587   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:15.999970   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:16.000028   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:16.013314   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:16.013391   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:16.023622   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:16.023685   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:16.033767   18178 logs.go:276] 0 containers: []
	W0729 04:37:16.033780   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:16.033841   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:16.044255   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:16.044275   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:16.044281   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:16.056122   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:16.056136   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:16.070689   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:16.070703   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:16.082198   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:16.082210   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:16.093975   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:16.093985   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:16.105109   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:16.105126   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:16.140228   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:16.140239   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:16.158343   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:16.158354   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:16.174375   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:16.174386   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:16.189000   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:16.189013   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:16.225995   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:16.226008   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:16.230751   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:16.230760   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:16.246000   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:16.246012   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:16.259925   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:16.259939   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:16.272167   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:16.272178   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:19.466795   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:19.466825   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:18.797583   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:24.466922   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:24.467001   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:23.799680   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:23.799821   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:37:23.810845   18178 logs.go:276] 1 containers: [bd9f32999555]
	I0729 04:37:23.810919   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:37:23.821960   18178 logs.go:276] 1 containers: [b424b3acc7a7]
	I0729 04:37:23.822031   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:37:23.833880   18178 logs.go:276] 4 containers: [62d0a42eab2e 53a1b1e2c0c0 87f9f4ae3f9f c90a03aafe4d]
	I0729 04:37:23.833953   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:37:23.844776   18178 logs.go:276] 1 containers: [515fc9a50a62]
	I0729 04:37:23.844847   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:37:23.856279   18178 logs.go:276] 1 containers: [4347c8f1c9c6]
	I0729 04:37:23.856353   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:37:23.867986   18178 logs.go:276] 1 containers: [345f45bd5419]
	I0729 04:37:23.868060   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:37:23.884144   18178 logs.go:276] 0 containers: []
	W0729 04:37:23.884157   18178 logs.go:278] No container was found matching "kindnet"
	I0729 04:37:23.884223   18178 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:37:23.895572   18178 logs.go:276] 1 containers: [6a2fb20a4d04]
	I0729 04:37:23.895590   18178 logs.go:123] Gathering logs for kube-apiserver [bd9f32999555] ...
	I0729 04:37:23.895596   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd9f32999555"
	I0729 04:37:23.911914   18178 logs.go:123] Gathering logs for kube-scheduler [515fc9a50a62] ...
	I0729 04:37:23.911930   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515fc9a50a62"
	I0729 04:37:23.927750   18178 logs.go:123] Gathering logs for kube-controller-manager [345f45bd5419] ...
	I0729 04:37:23.927763   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 345f45bd5419"
	I0729 04:37:23.945788   18178 logs.go:123] Gathering logs for Docker ...
	I0729 04:37:23.945806   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:37:23.971420   18178 logs.go:123] Gathering logs for dmesg ...
	I0729 04:37:23.971434   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:37:23.976842   18178 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:37:23.976856   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:37:24.016301   18178 logs.go:123] Gathering logs for coredns [53a1b1e2c0c0] ...
	I0729 04:37:24.016314   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53a1b1e2c0c0"
	I0729 04:37:24.028277   18178 logs.go:123] Gathering logs for coredns [c90a03aafe4d] ...
	I0729 04:37:24.028290   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c90a03aafe4d"
	I0729 04:37:24.044816   18178 logs.go:123] Gathering logs for kube-proxy [4347c8f1c9c6] ...
	I0729 04:37:24.044828   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4347c8f1c9c6"
	I0729 04:37:24.056867   18178 logs.go:123] Gathering logs for kubelet ...
	I0729 04:37:24.056878   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:37:24.096640   18178 logs.go:123] Gathering logs for coredns [62d0a42eab2e] ...
	I0729 04:37:24.096661   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62d0a42eab2e"
	I0729 04:37:24.108927   18178 logs.go:123] Gathering logs for coredns [87f9f4ae3f9f] ...
	I0729 04:37:24.108940   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 87f9f4ae3f9f"
	I0729 04:37:24.120804   18178 logs.go:123] Gathering logs for storage-provisioner [6a2fb20a4d04] ...
	I0729 04:37:24.120818   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2fb20a4d04"
	I0729 04:37:24.133481   18178 logs.go:123] Gathering logs for etcd [b424b3acc7a7] ...
	I0729 04:37:24.133493   18178 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b424b3acc7a7"
	I0729 04:37:24.148923   18178 logs.go:123] Gathering logs for container status ...
	I0729 04:37:24.148941   18178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:37:26.663650   18178 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:29.467185   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:29.467232   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:31.665830   18178 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:31.670332   18178 out.go:177] 
	W0729 04:37:31.673315   18178 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 04:37:31.673328   18178 out.go:239] * 
	W0729 04:37:31.673938   18178 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:37:31.689277   18178 out.go:177] 
	I0729 04:37:34.467624   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:34.467675   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:39.468162   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:39.468192   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:37:39.857404   18743 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:37:39.861946   18743 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:37:39.869018   18743 addons.go:510] duration metric: took 30.520270708s for enable addons: enabled=[storage-provisioner]
	I0729 04:37:44.469049   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:44.469114   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
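	
	The failure driving this test is visible above: the client polls https://10.0.2.15:8443/healthz and every attempt ends in "context deadline exceeded" until the 6m0s node wait expires with GUEST_START. A minimal sketch for reproducing the same probe by hand, assuming the running-upgrade-317000 profile is still up, that `minikube ssh` can reach it, and that `curl` is present in the guest image (assumptions, not shown in this log):
	
	    # Probe the endpoint the test polls; -k skips verification of the apiserver
	    # serving certificate, --max-time mirrors the client-side timeout.
	    minikube ssh -p running-upgrade-317000 -- curl -k --max-time 5 https://10.0.2.15:8443/healthz
	
	A healthy apiserver answers with the literal body "ok"; matching the log, the request here would be expected to hang until the timeout fires.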
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-29 11:28:39 UTC, ends at Mon 2024-07-29 11:37:47 UTC. --
	Jul 29 11:37:32 running-upgrade-317000 dockerd[3216]: time="2024-07-29T11:37:32.531913208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 29 11:37:32 running-upgrade-317000 dockerd[3216]: time="2024-07-29T11:37:32.531991117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 29 11:37:32 running-upgrade-317000 dockerd[3216]: time="2024-07-29T11:37:32.532002033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 29 11:37:32 running-upgrade-317000 dockerd[3216]: time="2024-07-29T11:37:32.532105940Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c7bacc1450bd44fad50c0792fe8803e1e0757469562734f8630778cf86668f4a pid=18952 runtime=io.containerd.runc.v2
	Jul 29 11:37:33 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:33Z" level=error msg="ContainerStats resp: {0x400084f580 linux}"
	Jul 29 11:37:33 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:33Z" level=error msg="ContainerStats resp: {0x400084f980 linux}"
	Jul 29 11:37:33 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 11:37:34 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:34Z" level=error msg="ContainerStats resp: {0x4000928800 linux}"
	Jul 29 11:37:34 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:34Z" level=error msg="ContainerStats resp: {0x4000928940 linux}"
	Jul 29 11:37:34 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:34Z" level=error msg="ContainerStats resp: {0x4000928a80 linux}"
	Jul 29 11:37:34 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:34Z" level=error msg="ContainerStats resp: {0x4000730ac0 linux}"
	Jul 29 11:37:34 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:34Z" level=error msg="ContainerStats resp: {0x4000554080 linux}"
	Jul 29 11:37:34 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:34Z" level=error msg="ContainerStats resp: {0x4000554640 linux}"
	Jul 29 11:37:38 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 11:37:43 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 29 11:37:44 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:44Z" level=error msg="ContainerStats resp: {0x400062b040 linux}"
	Jul 29 11:37:44 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:44Z" level=error msg="ContainerStats resp: {0x4000555e40 linux}"
	Jul 29 11:37:45 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:45Z" level=error msg="ContainerStats resp: {0x400084fac0 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000730680 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000929580 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000731380 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000731540 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000731dc0 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000730280 linux}"
	Jul 29 11:37:46 running-upgrade-317000 cri-dockerd[3051]: time="2024-07-29T11:37:46Z" level=error msg="ContainerStats resp: {0x4000730880 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c7bacc1450bd4       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   670e70dee2a39
	1e4c8d7d274a1       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   7c9a2bc96b145
	62d0a42eab2e6       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   670e70dee2a39
	53a1b1e2c0c0b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   7c9a2bc96b145
	4347c8f1c9c64       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   3aeba34bb1da2
	6a2fb20a4d048       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   e601ebf4acf6d
	515fc9a50a623       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   678ec83fb91ac
	b424b3acc7a7a       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   12fa9558f1370
	bd9f32999555a       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   359eb3c3add65
	345f45bd5419d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   3cb9d4a379957
	
	
	==> coredns [1e4c8d7d274a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 493778867169167210.1713574980122374627. HINFO: read udp 10.244.0.3:58297->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 493778867169167210.1713574980122374627. HINFO: read udp 10.244.0.3:46270->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 493778867169167210.1713574980122374627. HINFO: read udp 10.244.0.3:52376->10.0.2.3:53: i/o timeout
	
	
	==> coredns [53a1b1e2c0c0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:57616->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:50889->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:54408->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:55983->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:36682->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:56421->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:51542->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:43379->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:51981->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2953681132913450251.4423303313546345180. HINFO: read udp 10.244.0.3:42167->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [62d0a42eab2e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:43456->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:40096->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:33956->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:46683->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:56247->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:48233->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:34942->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:41583->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:60941->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5654897184926892882.6372190765479320865. HINFO: read udp 10.244.0.2:33133->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c7bacc1450bd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6181069564951013968.2220722157870089371. HINFO: read udp 10.244.0.2:41282->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181069564951013968.2220722157870089371. HINFO: read udp 10.244.0.2:49753->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6181069564951013968.2220722157870089371. HINFO: read udp 10.244.0.2:60594->10.0.2.3:53: i/o timeout
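	
	Across all four coredns instances the only errors are HINFO lookups timing out against 10.0.2.3:53. The random-label HINFO query is the probe sent by CoreDNS's `loop` plugin at startup, and 10.0.2.3 is QEMU's user-mode (slirp) DNS, so these lines point at upstream DNS from the guest rather than at coredns itself. A minimal way to exercise the same upstream path by hand, assuming the guest image ships a busybox `nslookup` (an assumption):
	
	    # Query the upstream resolver the pods forward to; a timeout here
	    # reproduces the i/o timeouts coredns logs above.
	    minikube ssh -p running-upgrade-317000 -- nslookup kubernetes.io 10.0.2.3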
	
	
	==> describe nodes <==
	Name:               running-upgrade-317000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-317000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff
	                    minikube.k8s.io/name=running-upgrade-317000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T04_33_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:33:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-317000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:37:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:33:30 +0000   Mon, 29 Jul 2024 11:33:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:33:30 +0000   Mon, 29 Jul 2024 11:33:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:33:30 +0000   Mon, 29 Jul 2024 11:33:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:33:30 +0000   Mon, 29 Jul 2024 11:33:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-317000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e095590f2ad40508b118d104997170f
	  System UUID:                7e095590f2ad40508b118d104997170f
	  Boot ID:                    925c8caf-1460-4b78-9d27-9db9901ca40d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-48c5k                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-fzmd6                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-317000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-317000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-317000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-rvbbv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-317000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-317000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-317000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-317000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-317000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-317000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-317000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-317000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-317000 event: Registered Node running-upgrade-317000 in Controller
	
	
	==> dmesg <==
	[  +1.718808] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.088372] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.081336] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.142902] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.076427] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.082624] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.769175] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[Jul29 11:29] systemd-fstab-generator[1933]: Ignoring "noauto" for root device
	[  +2.862172] systemd-fstab-generator[2215]: Ignoring "noauto" for root device
	[  +0.146275] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +0.094996] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +0.098698] systemd-fstab-generator[2274]: Ignoring "noauto" for root device
	[  +3.296596] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.212056] systemd-fstab-generator[3006]: Ignoring "noauto" for root device
	[  +0.075587] systemd-fstab-generator[3019]: Ignoring "noauto" for root device
	[  +0.091021] systemd-fstab-generator[3030]: Ignoring "noauto" for root device
	[  +0.087988] systemd-fstab-generator[3044]: Ignoring "noauto" for root device
	[  +2.286622] systemd-fstab-generator[3195]: Ignoring "noauto" for root device
	[  +3.750849] systemd-fstab-generator[3603]: Ignoring "noauto" for root device
	[  +1.125089] systemd-fstab-generator[3896]: Ignoring "noauto" for root device
	[ +18.281112] kauditd_printk_skb: 68 callbacks suppressed
	[Jul29 11:33] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.327515] systemd-fstab-generator[11989]: Ignoring "noauto" for root device
	[  +5.645251] systemd-fstab-generator[12595]: Ignoring "noauto" for root device
	[  +0.469315] systemd-fstab-generator[12728]: Ignoring "noauto" for root device
	
	
	==> etcd [b424b3acc7a7] <==
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-29T11:33:26.431Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T11:33:27.204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-29T11:33:27.205Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:33:27.206Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:33:27.206Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:33:27.207Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T11:33:27.207Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-317000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T11:33:27.207Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:33:27.207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T11:33:27.207Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T11:33:27.207Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T11:33:27.209Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T11:33:27.210Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 11:37:48 up 9 min,  0 users,  load average: 0.20, 0.32, 0.18
	Linux running-upgrade-317000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bd9f32999555] <==
	I0729 11:33:28.412359       1 controller.go:611] quota admission added evaluator for: namespaces
	I0729 11:33:28.459180       1 cache.go:39] Caches are synced for autoregister controller
	I0729 11:33:28.459253       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 11:33:28.459292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 11:33:28.461160       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 11:33:28.475645       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 11:33:28.477466       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 11:33:29.220502       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 11:33:29.363370       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0729 11:33:29.365523       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0729 11:33:29.365540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 11:33:29.522924       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:33:29.532945       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 11:33:29.617045       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0729 11:33:29.619367       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0729 11:33:29.619742       1 controller.go:611] quota admission added evaluator for: endpoints
	I0729 11:33:29.621088       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:33:30.508395       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 11:33:30.795302       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 11:33:30.799046       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0729 11:33:30.803584       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 11:33:30.850620       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:33:44.210594       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0729 11:33:44.259872       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0729 11:33:44.758983       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [345f45bd5419] <==
	I0729 11:33:43.358197       1 shared_informer.go:262] Caches are synced for crt configmap
	I0729 11:33:43.359042       1 shared_informer.go:262] Caches are synced for cronjob
	I0729 11:33:43.361865       1 shared_informer.go:262] Caches are synced for TTL
	I0729 11:33:43.458463       1 shared_informer.go:262] Caches are synced for taint
	I0729 11:33:43.458527       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 11:33:43.458553       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-317000. Assuming now as a timestamp.
	I0729 11:33:43.458596       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 11:33:43.458667       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 11:33:43.458802       1 event.go:294] "Event occurred" object="running-upgrade-317000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-317000 event: Registered Node running-upgrade-317000 in Controller"
	I0729 11:33:43.459436       1 shared_informer.go:262] Caches are synced for stateful set
	I0729 11:33:43.509984       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 11:33:43.536118       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 11:33:43.539304       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 11:33:43.547013       1 shared_informer.go:262] Caches are synced for deployment
	I0729 11:33:43.562589       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 11:33:43.605408       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0729 11:33:43.608566       1 shared_informer.go:262] Caches are synced for disruption
	I0729 11:33:43.608590       1 disruption.go:371] Sending events to api server.
	I0729 11:33:43.972916       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 11:33:44.043576       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 11:33:44.043584       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 11:33:44.214011       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rvbbv"
	I0729 11:33:44.260922       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0729 11:33:44.368475       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-48c5k"
	I0729 11:33:44.373411       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fzmd6"
	
	
	==> kube-proxy [4347c8f1c9c6] <==
	I0729 11:33:44.729343       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0729 11:33:44.729375       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0729 11:33:44.729387       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 11:33:44.755283       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 11:33:44.755296       1 server_others.go:206] "Using iptables Proxier"
	I0729 11:33:44.755312       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 11:33:44.755536       1 server.go:661] "Version info" version="v1.24.1"
	I0729 11:33:44.755540       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 11:33:44.755837       1 config.go:317] "Starting service config controller"
	I0729 11:33:44.755843       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 11:33:44.755851       1 config.go:226] "Starting endpoint slice config controller"
	I0729 11:33:44.755853       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 11:33:44.756156       1 config.go:444] "Starting node config controller"
	I0729 11:33:44.756159       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 11:33:44.856575       1 shared_informer.go:262] Caches are synced for node config
	I0729 11:33:44.856592       1 shared_informer.go:262] Caches are synced for service config
	I0729 11:33:44.856604       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [515fc9a50a62] <==
	W0729 11:33:28.416628       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 11:33:28.416648       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 11:33:28.416697       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:33:28.416719       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 11:33:28.416763       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:33:28.416785       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 11:33:28.416819       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:33:28.416852       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 11:33:28.416898       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:33:28.416917       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 11:33:28.416962       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:33:28.416979       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 11:33:28.417019       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:33:28.417040       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 11:33:28.417064       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:33:28.417097       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:33:28.417139       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:33:28.417159       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 11:33:29.361773       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:33:29.361928       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 11:33:29.390863       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:33:29.390944       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:33:29.428994       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:33:29.429074       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0729 11:33:31.515517       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-29 11:28:39 UTC, ends at Mon 2024-07-29 11:37:48 UTC. --
	Jul 29 11:33:32 running-upgrade-317000 kubelet[12601]: E0729 11:33:32.628929   12601 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-317000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-317000"
	Jul 29 11:33:32 running-upgrade-317000 kubelet[12601]: E0729 11:33:32.831440   12601 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-317000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-317000"
	Jul 29 11:33:33 running-upgrade-317000 kubelet[12601]: I0729 11:33:33.026884   12601 request.go:601] Waited for 1.113711426s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 29 11:33:33 running-upgrade-317000 kubelet[12601]: E0729 11:33:33.029797   12601 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-317000\" already exists" pod="kube-system/etcd-running-upgrade-317000"
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: I0729 11:33:43.342501   12601 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: I0729 11:33:43.342837   12601 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: I0729 11:33:43.463701   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: I0729 11:33:43.644091   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82229\" (UniqueName: \"kubernetes.io/projected/8c032d4b-42dc-480c-9486-59b3da9c7635-kube-api-access-82229\") pod \"storage-provisioner\" (UID: \"8c032d4b-42dc-480c-9486-59b3da9c7635\") " pod="kube-system/storage-provisioner"
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: I0729 11:33:43.644120   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8c032d4b-42dc-480c-9486-59b3da9c7635-tmp\") pod \"storage-provisioner\" (UID: \"8c032d4b-42dc-480c-9486-59b3da9c7635\") " pod="kube-system/storage-provisioner"
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: E0729 11:33:43.750844   12601 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: E0729 11:33:43.750871   12601 projected.go:192] Error preparing data for projected volume kube-api-access-82229 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 29 11:33:43 running-upgrade-317000 kubelet[12601]: E0729 11:33:43.750920   12601 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8c032d4b-42dc-480c-9486-59b3da9c7635-kube-api-access-82229 podName:8c032d4b-42dc-480c-9486-59b3da9c7635 nodeName:}" failed. No retries permitted until 2024-07-29 11:33:44.250902568 +0000 UTC m=+13.465463436 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-82229" (UniqueName: "kubernetes.io/projected/8c032d4b-42dc-480c-9486-59b3da9c7635-kube-api-access-82229") pod "storage-provisioner" (UID: "8c032d4b-42dc-480c-9486-59b3da9c7635") : configmap "kube-root-ca.crt" not found
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.216712   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.252613   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8859b9ab-2218-4625-b1f6-99f411ea45a0-lib-modules\") pod \"kube-proxy-rvbbv\" (UID: \"8859b9ab-2218-4625-b1f6-99f411ea45a0\") " pod="kube-system/kube-proxy-rvbbv"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.252649   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8859b9ab-2218-4625-b1f6-99f411ea45a0-kube-proxy\") pod \"kube-proxy-rvbbv\" (UID: \"8859b9ab-2218-4625-b1f6-99f411ea45a0\") " pod="kube-system/kube-proxy-rvbbv"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.252664   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8859b9ab-2218-4625-b1f6-99f411ea45a0-xtables-lock\") pod \"kube-proxy-rvbbv\" (UID: \"8859b9ab-2218-4625-b1f6-99f411ea45a0\") " pod="kube-system/kube-proxy-rvbbv"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.252676   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hkct\" (UniqueName: \"kubernetes.io/projected/8859b9ab-2218-4625-b1f6-99f411ea45a0-kube-api-access-5hkct\") pod \"kube-proxy-rvbbv\" (UID: \"8859b9ab-2218-4625-b1f6-99f411ea45a0\") " pod="kube-system/kube-proxy-rvbbv"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.370125   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.374201   12601 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.557328   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9v2\" (UniqueName: \"kubernetes.io/projected/ef95070b-6dab-4aeb-a43c-29f5e4339c97-kube-api-access-5d9v2\") pod \"coredns-6d4b75cb6d-fzmd6\" (UID: \"ef95070b-6dab-4aeb-a43c-29f5e4339c97\") " pod="kube-system/coredns-6d4b75cb6d-fzmd6"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.557407   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef95070b-6dab-4aeb-a43c-29f5e4339c97-config-volume\") pod \"coredns-6d4b75cb6d-fzmd6\" (UID: \"ef95070b-6dab-4aeb-a43c-29f5e4339c97\") " pod="kube-system/coredns-6d4b75cb6d-fzmd6"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.557424   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87143065-305a-4713-b108-2e8f732daeb5-config-volume\") pod \"coredns-6d4b75cb6d-48c5k\" (UID: \"87143065-305a-4713-b108-2e8f732daeb5\") " pod="kube-system/coredns-6d4b75cb6d-48c5k"
	Jul 29 11:33:44 running-upgrade-317000 kubelet[12601]: I0729 11:33:44.557439   12601 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tv7n\" (UniqueName: \"kubernetes.io/projected/87143065-305a-4713-b108-2e8f732daeb5-kube-api-access-8tv7n\") pod \"coredns-6d4b75cb6d-48c5k\" (UID: \"87143065-305a-4713-b108-2e8f732daeb5\") " pod="kube-system/coredns-6d4b75cb6d-48c5k"
	Jul 29 11:37:33 running-upgrade-317000 kubelet[12601]: I0729 11:37:33.215696   12601 scope.go:110] "RemoveContainer" containerID="c90a03aafe4da52d48fd72b7c71feae027e3f5327466404fdb1fca69fedf5c45"
	Jul 29 11:37:33 running-upgrade-317000 kubelet[12601]: I0729 11:37:33.234360   12601 scope.go:110] "RemoveContainer" containerID="87f9f4ae3f9fa647e3406d5ac9a84d5888eb112966a23eb6e6e5ccc7d420f403"
	
	
	==> storage-provisioner [6a2fb20a4d04] <==
	I0729 11:33:44.575684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:33:44.580879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:33:44.580898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:33:44.584259       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:33:44.584403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"47c675d4-7156-45f6-ac2e-77b7042939e9", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-317000_3ba537f0-c892-409e-9741-e1f509fe8e44 became leader
	I0729 11:33:44.584417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-317000_3ba537f0-c892-409e-9741-e1f509fe8e44!
	I0729 11:33:44.685462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-317000_3ba537f0-c892-409e-9741-e1f509fe8e44!
	

-- /stdout --
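
The storage-provisioner log above shows the standard client-go leader-election handshake: the pod attempts to acquire the lease kube-system/k8s.io-minikube-hostpath, acquires it, emits a LeaderElection Event, and only then starts its provisioner controller. Below is a minimal Go sketch of that pattern, assuming a current client-go with a Lease-based lock; the provisioner binary in this run actually locks on an Endpoints object (visible in the Event reference above), and the identity string here is illustrative.

	// Minimal sketch of the leader-election handshake seen in the
	// storage-provisioner log ("attempting to acquire leader lease ...",
	// "successfully acquired lease ..."). Assumes in-cluster credentials.
	package main
	
	import (
		"context"
		"log"
		"time"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs as a pod
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// Lease lock named like the one in the log; identity is illustrative.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"})
		if err != nil {
			log.Fatal(err)
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Corresponds to "Starting provisioner controller" above.
					log.Print("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Print("lost lease; shutting down")
				},
			},
		})
	}
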
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-317000 -n running-upgrade-317000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-317000 -n running-upgrade-317000: exit status 2 (15.680303417s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-317000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-317000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-317000
--- FAIL: TestRunningBinaryUpgrade (587.89s)
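
Every qemu2 VM start in this run fails the same way: Failed to connect to "/var/run/socket_vmnet": Connection refused. A refused dial on that unix socket means no socket_vmnet daemon was accepting connections on the CI host, so QEMU never receives a network fd and the test fails for an environmental reason rather than an upgrade regression. A minimal Go probe for that condition, using only the socket path from the logs above:

	// Quick probe for the shared root cause of the qemu2 failures in this run.
	// "Connection refused" on a unix socket means nothing is accepting on it,
	// which is exactly what socket_vmnet_client hits before QEMU can start.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the STDERR seen above; VM create/start cannot proceed.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
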

TestKubernetesUpgrade (17.46s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-813000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-813000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.9087895s)

-- stdout --
	* [kubernetes-upgrade-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-813000" primary control-plane node in "kubernetes-upgrade-813000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-813000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:31:18.937058   18618 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:31:18.937220   18618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:31:18.937223   18618 out.go:304] Setting ErrFile to fd 2...
	I0729 04:31:18.937225   18618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:31:18.937343   18618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:31:18.938522   18618 out.go:298] Setting JSON to false
	I0729 04:31:18.954876   18618 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9047,"bootTime":1722243631,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:31:18.954955   18618 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:31:18.960051   18618 out.go:177] * [kubernetes-upgrade-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:31:18.968029   18618 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:31:18.968097   18618 notify.go:220] Checking for updates...
	I0729 04:31:18.975012   18618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:31:18.979022   18618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:31:18.982976   18618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:31:18.985977   18618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:31:18.989006   18618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:31:18.992292   18618 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:31:18.992368   18618 config.go:182] Loaded profile config "running-upgrade-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:31:18.992428   18618 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:31:18.996077   18618 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:31:19.002941   18618 start.go:297] selected driver: qemu2
	I0729 04:31:19.002949   18618 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:31:19.002955   18618 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:31:19.005180   18618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:31:19.008918   18618 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:31:19.012080   18618 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:31:19.012097   18618 cni.go:84] Creating CNI manager for ""
	I0729 04:31:19.012104   18618 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:31:19.012134   18618 start.go:340] cluster config:
	{Name:kubernetes-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:31:19.015780   18618 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:31:19.023960   18618 out.go:177] * Starting "kubernetes-upgrade-813000" primary control-plane node in "kubernetes-upgrade-813000" cluster
	I0729 04:31:19.027920   18618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:31:19.027939   18618 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:31:19.027950   18618 cache.go:56] Caching tarball of preloaded images
	I0729 04:31:19.028027   18618 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:31:19.028032   18618 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:31:19.028111   18618 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kubernetes-upgrade-813000/config.json ...
	I0729 04:31:19.028121   18618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kubernetes-upgrade-813000/config.json: {Name:mk0073423c6995bbe653f2c4e85389d3549b05df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:31:19.028477   18618 start.go:360] acquireMachinesLock for kubernetes-upgrade-813000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:31:19.028508   18618 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "kubernetes-upgrade-813000"
	I0729 04:31:19.028519   18618 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:31:19.028554   18618 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:31:19.032039   18618 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:31:19.056464   18618 start.go:159] libmachine.API.Create for "kubernetes-upgrade-813000" (driver="qemu2")
	I0729 04:31:19.056489   18618 client.go:168] LocalClient.Create starting
	I0729 04:31:19.056566   18618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:31:19.056595   18618 main.go:141] libmachine: Decoding PEM data...
	I0729 04:31:19.056609   18618 main.go:141] libmachine: Parsing certificate...
	I0729 04:31:19.056646   18618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:31:19.056669   18618 main.go:141] libmachine: Decoding PEM data...
	I0729 04:31:19.056680   18618 main.go:141] libmachine: Parsing certificate...
	I0729 04:31:19.057094   18618 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:31:19.208515   18618 main.go:141] libmachine: Creating SSH key...
	I0729 04:31:19.361623   18618 main.go:141] libmachine: Creating Disk image...
	I0729 04:31:19.361632   18618 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:31:19.361870   18618 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:19.371962   18618 main.go:141] libmachine: STDOUT: 
	I0729 04:31:19.371985   18618 main.go:141] libmachine: STDERR: 
	I0729 04:31:19.372039   18618 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2 +20000M
	I0729 04:31:19.380080   18618 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:31:19.380096   18618 main.go:141] libmachine: STDERR: 
	I0729 04:31:19.380112   18618 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:19.380123   18618 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:31:19.380135   18618 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:31:19.380161   18618 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:34:73:ad:61:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:19.381819   18618 main.go:141] libmachine: STDOUT: 
	I0729 04:31:19.381835   18618 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:31:19.381853   18618 client.go:171] duration metric: took 325.368833ms to LocalClient.Create
	I0729 04:31:21.383925   18618 start.go:128] duration metric: took 2.355396459s to createHost
	I0729 04:31:21.383957   18618 start.go:83] releasing machines lock for "kubernetes-upgrade-813000", held for 2.355501292s
	W0729 04:31:21.383982   18618 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:31:21.388862   18618 out.go:177] * Deleting "kubernetes-upgrade-813000" in qemu2 ...
	W0729 04:31:21.414393   18618 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:31:21.414403   18618 start.go:729] Will try again in 5 seconds ...
	I0729 04:31:26.414710   18618 start.go:360] acquireMachinesLock for kubernetes-upgrade-813000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:31:26.415257   18618 start.go:364] duration metric: took 422.333µs to acquireMachinesLock for "kubernetes-upgrade-813000"
	I0729 04:31:26.415399   18618 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:31:26.415699   18618 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:31:26.425114   18618 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:31:26.473907   18618 start.go:159] libmachine.API.Create for "kubernetes-upgrade-813000" (driver="qemu2")
	I0729 04:31:26.473968   18618 client.go:168] LocalClient.Create starting
	I0729 04:31:26.474076   18618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:31:26.474143   18618 main.go:141] libmachine: Decoding PEM data...
	I0729 04:31:26.474165   18618 main.go:141] libmachine: Parsing certificate...
	I0729 04:31:26.474230   18618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:31:26.474287   18618 main.go:141] libmachine: Decoding PEM data...
	I0729 04:31:26.474311   18618 main.go:141] libmachine: Parsing certificate...
	I0729 04:31:26.474852   18618 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:31:26.632478   18618 main.go:141] libmachine: Creating SSH key...
	I0729 04:31:26.755014   18618 main.go:141] libmachine: Creating Disk image...
	I0729 04:31:26.755023   18618 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:31:26.755280   18618 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:26.765162   18618 main.go:141] libmachine: STDOUT: 
	I0729 04:31:26.765183   18618 main.go:141] libmachine: STDERR: 
	I0729 04:31:26.765235   18618 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2 +20000M
	I0729 04:31:26.773362   18618 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:31:26.773376   18618 main.go:141] libmachine: STDERR: 
	I0729 04:31:26.773386   18618 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:26.773395   18618 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:31:26.773408   18618 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:31:26.773439   18618 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:25:6c:b5:1b:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:26.775268   18618 main.go:141] libmachine: STDOUT: 
	I0729 04:31:26.775292   18618 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:31:26.775304   18618 client.go:171] duration metric: took 301.339375ms to LocalClient.Create
	I0729 04:31:28.777454   18618 start.go:128] duration metric: took 2.3617815s to createHost
	I0729 04:31:28.777545   18618 start.go:83] releasing machines lock for "kubernetes-upgrade-813000", held for 2.362323916s
	W0729 04:31:28.777866   18618 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-813000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:31:28.787367   18618 out.go:177] 
	W0729 04:31:28.793367   18618 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:31:28.793385   18618 out.go:239] * 
	* 
	W0729 04:31:28.794928   18618 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:31:28.805362   18618 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-813000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-813000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-813000: (2.124276542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-813000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-813000 status --format={{.Host}}: exit status 7 (54.094792ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-813000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-813000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.192846083s)

-- stdout --
	* [kubernetes-upgrade-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-813000" primary control-plane node in "kubernetes-upgrade-813000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-813000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:31:31.028346   18658 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:31:31.028512   18658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:31:31.028515   18658 out.go:304] Setting ErrFile to fd 2...
	I0729 04:31:31.028518   18658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:31:31.028649   18658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:31:31.029693   18658 out.go:298] Setting JSON to false
	I0729 04:31:31.046079   18658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9060,"bootTime":1722243631,"procs":499,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:31:31.046155   18658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:31:31.051696   18658 out.go:177] * [kubernetes-upgrade-813000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:31:31.058739   18658 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:31:31.058792   18658 notify.go:220] Checking for updates...
	I0729 04:31:31.065718   18658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:31:31.068748   18658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:31:31.072684   18658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:31:31.075680   18658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:31:31.080835   18658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:31:31.083941   18658 config.go:182] Loaded profile config "kubernetes-upgrade-813000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:31:31.084240   18658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:31:31.087747   18658 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:31:31.094699   18658 start.go:297] selected driver: qemu2
	I0729 04:31:31.094706   18658 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-813000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:31:31.094770   18658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:31:31.097188   18658 cni.go:84] Creating CNI manager for ""
	I0729 04:31:31.097205   18658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:31:31.097243   18658 start.go:340] cluster config:
	{Name:kubernetes-upgrade-813000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-813000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:31:31.100716   18658 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:31:31.108725   18658 out.go:177] * Starting "kubernetes-upgrade-813000" primary control-plane node in "kubernetes-upgrade-813000" cluster
	I0729 04:31:31.112684   18658 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:31:31.112700   18658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:31:31.112711   18658 cache.go:56] Caching tarball of preloaded images
	I0729 04:31:31.112769   18658 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:31:31.112775   18658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:31:31.112840   18658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kubernetes-upgrade-813000/config.json ...
	I0729 04:31:31.113302   18658 start.go:360] acquireMachinesLock for kubernetes-upgrade-813000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:31:31.113330   18658 start.go:364] duration metric: took 21.416µs to acquireMachinesLock for "kubernetes-upgrade-813000"
	I0729 04:31:31.113339   18658 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:31:31.113345   18658 fix.go:54] fixHost starting: 
	I0729 04:31:31.113453   18658 fix.go:112] recreateIfNeeded on kubernetes-upgrade-813000: state=Stopped err=<nil>
	W0729 04:31:31.113461   18658 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:31:31.121770   18658 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-813000" ...
	I0729 04:31:31.125670   18658 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:31:31.125710   18658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:25:6c:b5:1b:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:31.127816   18658 main.go:141] libmachine: STDOUT: 
	I0729 04:31:31.127834   18658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:31:31.127873   18658 fix.go:56] duration metric: took 14.529ms for fixHost
	I0729 04:31:31.127877   18658 start.go:83] releasing machines lock for "kubernetes-upgrade-813000", held for 14.543375ms
	W0729 04:31:31.127883   18658 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:31:31.127915   18658 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:31:31.127920   18658 start.go:729] Will try again in 5 seconds ...
	I0729 04:31:36.130025   18658 start.go:360] acquireMachinesLock for kubernetes-upgrade-813000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:31:36.130511   18658 start.go:364] duration metric: took 334.125µs to acquireMachinesLock for "kubernetes-upgrade-813000"
	I0729 04:31:36.130593   18658 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:31:36.130613   18658 fix.go:54] fixHost starting: 
	I0729 04:31:36.131306   18658 fix.go:112] recreateIfNeeded on kubernetes-upgrade-813000: state=Stopped err=<nil>
	W0729 04:31:36.131332   18658 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:31:36.141595   18658 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-813000" ...
	I0729 04:31:36.145845   18658 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:31:36.146077   18658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:25:6c:b5:1b:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubernetes-upgrade-813000/disk.qcow2
	I0729 04:31:36.155430   18658 main.go:141] libmachine: STDOUT: 
	I0729 04:31:36.155495   18658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:31:36.155594   18658 fix.go:56] duration metric: took 24.98225ms for fixHost
	I0729 04:31:36.155611   18658 start.go:83] releasing machines lock for "kubernetes-upgrade-813000", held for 25.075375ms
	W0729 04:31:36.155800   18658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-813000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:31:36.162769   18658 out.go:177] 
	W0729 04:31:36.166820   18658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:31:36.166861   18658 out.go:239] * 
	* 
	W0729 04:31:36.169317   18658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:31:36.177739   18658 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-813000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-813000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-813000 version --output=json: exit status 1 (63.921125ms)

** stderr ** 
	error: context "kubernetes-upgrade-813000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-29 04:31:36.25718 -0700 PDT m=+927.571728376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-813000 -n kubernetes-upgrade-813000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-813000 -n kubernetes-upgrade-813000: exit status 7 (33.252875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-813000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-813000
--- FAIL: TestKubernetesUpgrade (17.46s)
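
Note how the harness tolerates some non-zero exits: "status error: exit status 7 (may be ok)" above, because minikube status encodes component state in its exit bits (7 = host, cluster and Kubernetes all down) rather than signalling a crash. A small Go sketch of that handling, assuming the binary path and profile name from this run; the exit-code inspection itself is plain os/exec:

	// Sketch of the "exit status 7 (may be ok)" pattern: a non-zero exit
	// from "minikube status" reports state, so it is not automatically a
	// failure. Binary path and profile name are the ones used in this run.
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "kubernetes-upgrade-813000")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// e.g. exit code 7 with "Stopped" on stdout, as in the log above.
			fmt.Printf("status %q, exit code %d (may be ok)\n", out, ee.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("status %q\n", out)
	}
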

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.94s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19341
- KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1671453638/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.94s)
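
This failure is expected behaviour rather than flakiness: hyperkit is an Intel-only hypervisor, so minikube rejects it on darwin/arm64 with DRV_UNSUPPORTED_OS (exit status 56) before any VM is attempted. An illustrative Go guard of the same shape, not minikube's actual code:

	// Illustrative platform guard for the DRV_UNSUPPORTED_OS failure above:
	// hyperkit only runs on darwin/amd64, so a darwin/arm64 host is rejected
	// up front. This mirrors the behaviour, not minikube's implementation.
	package main
	
	import (
		"fmt"
		"runtime"
	)
	
	func hyperkitSupported() bool {
		return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
	}
	
	func main() {
		if !hyperkitSupported() {
			fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
			return
		}
		fmt.Println("hyperkit driver available")
	}
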

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.19s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19341
- KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2077451167/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.19s)

TestStoppedBinaryUpgrade/Upgrade (573.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3442616449 start -p stopped-upgrade-514000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3442616449 start -p stopped-upgrade-514000 --memory=2200 --vm-driver=qemu2 : (40.405920166s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3442616449 -p stopped-upgrade-514000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3442616449 -p stopped-upgrade-514000 stop: (12.105058375s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-514000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-514000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.773204959s)

-- stdout --
	* [stopped-upgrade-514000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-514000" primary control-plane node in "stopped-upgrade-514000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-514000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0729 04:32:29.820872   18743 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:32:29.821025   18743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:32:29.821029   18743 out.go:304] Setting ErrFile to fd 2...
	I0729 04:32:29.821032   18743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:32:29.821186   18743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:32:29.822330   18743 out.go:298] Setting JSON to false
	I0729 04:32:29.840226   18743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9118,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:32:29.840301   18743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:32:29.846036   18743 out.go:177] * [stopped-upgrade-514000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:32:29.854097   18743 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:32:29.854149   18743 notify.go:220] Checking for updates...
	I0729 04:32:29.863109   18743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:32:29.867035   18743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:32:29.870083   18743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:32:29.873092   18743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:32:29.876028   18743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:32:29.879326   18743 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:32:29.881993   18743 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:32:29.885053   18743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:32:29.888082   18743 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:32:29.893990   18743 start.go:297] selected driver: qemu2
	I0729 04:32:29.893997   18743 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:32:29.894047   18743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:32:29.896728   18743 cni.go:84] Creating CNI manager for ""
	I0729 04:32:29.896747   18743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:32:29.896782   18743 start.go:340] cluster config:
	{Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:32:29.896832   18743 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:32:29.905072   18743 out.go:177] * Starting "stopped-upgrade-514000" primary control-plane node in "stopped-upgrade-514000" cluster
	I0729 04:32:29.908968   18743 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:32:29.908985   18743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0729 04:32:29.908994   18743 cache.go:56] Caching tarball of preloaded images
	I0729 04:32:29.909062   18743 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:32:29.909069   18743 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0729 04:32:29.909116   18743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/config.json ...
	I0729 04:32:29.909472   18743 start.go:360] acquireMachinesLock for stopped-upgrade-514000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:32:29.909508   18743 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "stopped-upgrade-514000"
	I0729 04:32:29.909519   18743 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:32:29.909524   18743 fix.go:54] fixHost starting: 
	I0729 04:32:29.909626   18743 fix.go:112] recreateIfNeeded on stopped-upgrade-514000: state=Stopped err=<nil>
	W0729 04:32:29.909634   18743 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:32:29.917037   18743 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-514000" ...
	I0729 04:32:29.921061   18743 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:32:29.921127   18743 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53329-:22,hostfwd=tcp::53330-:2376,hostname=stopped-upgrade-514000 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/disk.qcow2
	I0729 04:32:29.967282   18743 main.go:141] libmachine: STDOUT: 
	I0729 04:32:29.967311   18743 main.go:141] libmachine: STDERR: 
	I0729 04:32:29.967317   18743 main.go:141] libmachine: Waiting for VM to start (ssh -p 53329 docker@127.0.0.1)...
	I0729 04:32:50.119018   18743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/config.json ...
	I0729 04:32:50.119763   18743 machine.go:94] provisionDockerMachine start ...
	I0729 04:32:50.119971   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.120457   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.120471   18743 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 04:32:50.210542   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 04:32:50.210581   18743 buildroot.go:166] provisioning hostname "stopped-upgrade-514000"
	I0729 04:32:50.210701   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.210960   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.210973   18743 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-514000 && echo "stopped-upgrade-514000" | sudo tee /etc/hostname
	I0729 04:32:50.290770   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-514000
	
	I0729 04:32:50.290836   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.290980   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.290992   18743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 04:32:50.359169   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 04:32:50.359182   18743 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19341-15486/.minikube CaCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19341-15486/.minikube}
	I0729 04:32:50.359190   18743 buildroot.go:174] setting up certificates
	I0729 04:32:50.359195   18743 provision.go:84] configureAuth start
	I0729 04:32:50.359203   18743 provision.go:143] copyHostCerts
	I0729 04:32:50.359280   18743 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem, removing ...
	I0729 04:32:50.359286   18743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem
	I0729 04:32:50.359386   18743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.pem (1078 bytes)
	I0729 04:32:50.359566   18743 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem, removing ...
	I0729 04:32:50.359569   18743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem
	I0729 04:32:50.359629   18743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/cert.pem (1123 bytes)
	I0729 04:32:50.360267   18743 exec_runner.go:144] found /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem, removing ...
	I0729 04:32:50.360270   18743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem
	I0729 04:32:50.360324   18743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19341-15486/.minikube/key.pem (1675 bytes)
	I0729 04:32:50.360414   18743 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-514000 san=[127.0.0.1 localhost minikube stopped-upgrade-514000]
	I0729 04:32:50.392972   18743 provision.go:177] copyRemoteCerts
	I0729 04:32:50.393019   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 04:32:50.393027   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:32:50.426194   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 04:32:50.432987   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 04:32:50.439415   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 04:32:50.446872   18743 provision.go:87] duration metric: took 87.674375ms to configureAuth
	I0729 04:32:50.446881   18743 buildroot.go:189] setting minikube options for container-runtime
	I0729 04:32:50.446990   18743 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:32:50.447021   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.447100   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.447105   18743 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0729 04:32:50.512045   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0729 04:32:50.512057   18743 buildroot.go:70] root file system type: tmpfs
	I0729 04:32:50.512112   18743 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0729 04:32:50.512159   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.512285   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.512323   18743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0729 04:32:50.579703   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0729 04:32:50.579765   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:50.579873   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:50.579882   18743 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0729 04:32:50.958866   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0729 04:32:50.958881   18743 machine.go:97] duration metric: took 839.127375ms to provisionDockerMachine
	I0729 04:32:50.958887   18743 start.go:293] postStartSetup for "stopped-upgrade-514000" (driver="qemu2")
	I0729 04:32:50.958894   18743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 04:32:50.958947   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 04:32:50.958956   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:32:50.994215   18743 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 04:32:50.995348   18743 info.go:137] Remote host: Buildroot 2021.02.12
	I0729 04:32:50.995355   18743 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19341-15486/.minikube/addons for local assets ...
	I0729 04:32:50.995444   18743 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19341-15486/.minikube/files for local assets ...
	I0729 04:32:50.995563   18743 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem -> 159732.pem in /etc/ssl/certs
	I0729 04:32:50.995694   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 04:32:50.998509   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem --> /etc/ssl/certs/159732.pem (1708 bytes)
	I0729 04:32:51.005196   18743 start.go:296] duration metric: took 46.305166ms for postStartSetup
	I0729 04:32:51.005210   18743 fix.go:56] duration metric: took 21.096203833s for fixHost
	I0729 04:32:51.005243   18743 main.go:141] libmachine: Using SSH client type: native
	I0729 04:32:51.005343   18743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d22a10] 0x104d25270 <nil>  [] 0s} localhost 53329 <nil> <nil>}
	I0729 04:32:51.005347   18743 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 04:32:51.067678   18743 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722252771.301786004
	
	I0729 04:32:51.067686   18743 fix.go:216] guest clock: 1722252771.301786004
	I0729 04:32:51.067690   18743 fix.go:229] Guest: 2024-07-29 04:32:51.301786004 -0700 PDT Remote: 2024-07-29 04:32:51.005212 -0700 PDT m=+21.211089834 (delta=296.574004ms)
	I0729 04:32:51.067700   18743 fix.go:200] guest clock delta is within tolerance: 296.574004ms
	I0729 04:32:51.067703   18743 start.go:83] releasing machines lock for "stopped-upgrade-514000", held for 21.158709542s
	I0729 04:32:51.067758   18743 ssh_runner.go:195] Run: cat /version.json
	I0729 04:32:51.067772   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:32:51.067761   18743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 04:32:51.067810   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	W0729 04:32:51.068307   18743 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53329: connect: connection refused
	I0729 04:32:51.068328   18743 retry.go:31] will retry after 140.777815ms: dial tcp [::1]:53329: connect: connection refused
	W0729 04:32:51.248954   18743 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0729 04:32:51.249069   18743 ssh_runner.go:195] Run: systemctl --version
	I0729 04:32:51.252094   18743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 04:32:51.254728   18743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 04:32:51.254770   18743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0729 04:32:51.259274   18743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0729 04:32:51.265824   18743 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 04:32:51.265834   18743 start.go:495] detecting cgroup driver to use...
	I0729 04:32:51.265917   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:32:51.275539   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0729 04:32:51.279230   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 04:32:51.282531   18743 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 04:32:51.282554   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 04:32:51.285655   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:32:51.288430   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 04:32:51.291414   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 04:32:51.294801   18743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 04:32:51.298346   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 04:32:51.301505   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 04:32:51.304280   18743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 04:32:51.307436   18743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 04:32:51.310567   18743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 04:32:51.313322   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:51.393888   18743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 04:32:51.399764   18743 start.go:495] detecting cgroup driver to use...
	I0729 04:32:51.399840   18743 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0729 04:32:51.406122   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:32:51.411052   18743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 04:32:51.418820   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 04:32:51.423560   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:32:51.428094   18743 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 04:32:51.472715   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 04:32:51.477855   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 04:32:51.483205   18743 ssh_runner.go:195] Run: which cri-dockerd
	I0729 04:32:51.484506   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0729 04:32:51.487093   18743 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0729 04:32:51.491900   18743 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0729 04:32:51.569352   18743 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0729 04:32:51.647789   18743 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0729 04:32:51.647853   18743 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0729 04:32:51.653728   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:51.737429   18743 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:32:52.892511   18743 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15509225s)
	I0729 04:32:52.892573   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0729 04:32:52.897336   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:32:52.901372   18743 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0729 04:32:52.989197   18743 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0729 04:32:53.073178   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:53.148512   18743 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0729 04:32:53.154233   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0729 04:32:53.159408   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:53.243581   18743 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0729 04:32:53.283181   18743 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0729 04:32:53.283254   18743 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0729 04:32:53.286550   18743 start.go:563] Will wait 60s for crictl version
	I0729 04:32:53.286603   18743 ssh_runner.go:195] Run: which crictl
	I0729 04:32:53.288134   18743 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 04:32:53.304548   18743 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0729 04:32:53.304624   18743 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:32:53.321233   18743 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0729 04:32:53.342753   18743 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0729 04:32:53.342877   18743 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0729 04:32:53.344354   18743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 04:32:53.347876   18743 kubeadm.go:883] updating cluster {Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0729 04:32:53.347921   18743 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0729 04:32:53.347958   18743 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:32:53.358180   18743 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:32:53.358190   18743 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:32:53.358241   18743 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:32:53.361367   18743 ssh_runner.go:195] Run: which lz4
	I0729 04:32:53.362651   18743 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 04:32:53.363937   18743 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 04:32:53.363948   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0729 04:32:54.273472   18743 docker.go:649] duration metric: took 910.872417ms to copy over tarball
	I0729 04:32:54.273554   18743 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 04:32:55.430392   18743 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156851917s)
	I0729 04:32:55.430411   18743 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 04:32:55.446475   18743 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0729 04:32:55.449916   18743 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0729 04:32:55.455184   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:55.537388   18743 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0729 04:32:57.145505   18743 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.608139209s)
	I0729 04:32:57.145607   18743 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0729 04:32:57.159068   18743 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0729 04:32:57.159077   18743 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0729 04:32:57.159082   18743 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 04:32:57.163502   18743 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.165356   18743 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.167408   18743 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.167595   18743 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.169460   18743 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.169554   18743 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.171059   18743 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.171079   18743 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.172146   18743 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 04:32:57.172212   18743 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.173506   18743 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.173509   18743 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.174332   18743 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.174859   18743 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 04:32:57.176054   18743 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.176633   18743 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.580018   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.592423   18743 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0729 04:32:57.592447   18743 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.592502   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0729 04:32:57.592994   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.602117   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.609119   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.609898   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0729 04:32:57.616707   18743 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0729 04:32:57.616729   18743 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.616783   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0729 04:32:57.618797   18743 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0729 04:32:57.618809   18743 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.618836   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0729 04:32:57.620879   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 04:32:57.624162   18743 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0729 04:32:57.624179   18743 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.624218   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0729 04:32:57.635764   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0729 04:32:57.644967   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0729 04:32:57.645002   18743 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0729 04:32:57.645016   18743 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0729 04:32:57.645063   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0729 04:32:57.647434   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0729 04:32:57.655511   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0729 04:32:57.655609   18743 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0729 04:32:57.655627   18743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 04:32:57.655706   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.666141   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.666147   18743 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0729 04:32:57.666159   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0729 04:32:57.666173   18743 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0729 04:32:57.666187   18743 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.666215   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 04:32:57.673375   18743 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 04:32:57.673387   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0729 04:32:57.689996   18743 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0729 04:32:57.690027   18743 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.690086   18743 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0729 04:32:57.690095   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 04:32:57.690188   18743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:32:57.717634   18743 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0729 04:32:57.722125   18743 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0729 04:32:57.722165   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0729 04:32:57.722213   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0729 04:32:57.760079   18743 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 04:32:57.760102   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0729 04:32:57.777948   18743 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0729 04:32:57.778070   18743 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.805486   18743 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 04:32:57.805528   18743 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0729 04:32:57.805547   18743 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.805602   18743 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:32:57.821270   18743 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 04:32:57.821376   18743 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:32:57.822676   18743 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0729 04:32:57.822688   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0729 04:32:57.852315   18743 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 04:32:57.852329   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0729 04:32:58.095287   18743 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 04:32:58.095328   18743 cache_images.go:92] duration metric: took 936.263042ms to LoadCachedImages
	W0729 04:32:58.095365   18743 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0729 04:32:58.095371   18743 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0729 04:32:58.095422   18743 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 04:32:58.095487   18743 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0729 04:32:58.109231   18743 cni.go:84] Creating CNI manager for ""
	I0729 04:32:58.109243   18743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:32:58.109247   18743 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 04:32:58.109258   18743 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-514000 NodeName:stopped-upgrade-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 04:32:58.109321   18743 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-514000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
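
The config printed above is one file containing four YAML documents separated by `---`: an InitConfiguration (bootstrap token and node registration), a ClusterConfiguration (API server, controller-manager, scheduler, and etcd settings), a KubeletConfiguration, and a KubeProxyConfiguration. A minimal Go sketch of splitting such a multi-document file (not minikube's actual parser, which uses proper YAML machinery):

```go
// Split a multi-document kubeadm config on "---" and report each kind.
// The string below is a trimmed stand-in for the config in the log above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	config := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

	for i, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
```
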
	
	I0729 04:32:58.109373   18743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0729 04:32:58.112177   18743 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 04:32:58.112211   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 04:32:58.115005   18743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0729 04:32:58.120105   18743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 04:32:58.124898   18743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
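
The `scp memory` entries above mean the payload (the systemd drop-in, the kubelet unit, kubeadm.yaml.new) is generated in memory and streamed to the guest over SSH rather than copied from a file on disk. A hedged stand-in for that kind of transfer, shelling out to the `ssh` binary and piping into `sudo tee`; the host name here is illustrative and this is not minikube's real runner:

```go
// Write in-memory bytes to a remote path by piping them through ssh into
// `sudo tee`, approximating the "scp memory --> <path>" steps in the log.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func writeRemote(host, path string, payload []byte) error {
	cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s > /dev/null", path))
	cmd.Stdin = bytes.NewReader(payload) // payload never touches local disk
	return cmd.Run()
}

func main() {
	yaml := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n")
	// Hypothetical host; minikube drives its own SSH runner instead.
	if err := writeRemote("docker@192.168.x.x", "/var/tmp/minikube/kubeadm.yaml.new", yaml); err != nil {
		fmt.Println("transfer failed:", err)
	}
}
```
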
	I0729 04:32:58.129939   18743 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0729 04:32:58.131191   18743 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
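
The bash one-liner above makes the /etc/hosts edit idempotent: it filters out any existing `control-plane.minikube.internal` line, appends the fresh `10.0.2.15` mapping, and copies the staged result back in one step. Roughly the same logic in Go (a sketch, with error handling trimmed; writing the final file into /etc still needs root):

```go
// Rebuild /etc/hosts without any stale control-plane.minikube.internal
// entry, append the current mapping, and stage the result to a temp file,
// mirroring the shell's `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp`.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane alias.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```
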
	I0729 04:32:58.135194   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:32:58.212601   18743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:32:58.217817   18743 certs.go:68] Setting up /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000 for IP: 10.0.2.15
	I0729 04:32:58.217824   18743 certs.go:194] generating shared ca certs ...
	I0729 04:32:58.217832   18743 certs.go:226] acquiring lock for ca certs: {Name:mkdf1894d8f9d5e3cc3aa4d0030f6ecce44e63f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.217990   18743 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.key
	I0729 04:32:58.218040   18743 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.key
	I0729 04:32:58.218049   18743 certs.go:256] generating profile certs ...
	I0729 04:32:58.218126   18743 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.key
	I0729 04:32:58.218144   18743 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e
	I0729 04:32:58.218152   18743 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0729 04:32:58.263911   18743 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e ...
	I0729 04:32:58.263935   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e: {Name:mk4226757e478e05e8081a6bd878cc84b87db3ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.264324   18743 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e ...
	I0729 04:32:58.264333   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e: {Name:mk9a6a66f7f3c7a6e0dd1d2799911a4a1764b4a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.264474   18743 certs.go:381] copying /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt.6bbbaa9e -> /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt
	I0729 04:32:58.264625   18743 certs.go:385] copying /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key.6bbbaa9e -> /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key
	I0729 04:32:58.264790   18743 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/proxy-client.key
	I0729 04:32:58.264926   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973.pem (1338 bytes)
	W0729 04:32:58.264956   18743 certs.go:480] ignoring /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973_empty.pem, impossibly tiny 0 bytes
	I0729 04:32:58.264961   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca-key.pem (1679 bytes)
	I0729 04:32:58.264980   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem (1078 bytes)
	I0729 04:32:58.264997   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem (1123 bytes)
	I0729 04:32:58.265015   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/key.pem (1675 bytes)
	I0729 04:32:58.265053   18743 certs.go:484] found cert: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem (1708 bytes)
	I0729 04:32:58.265417   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 04:32:58.272374   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 04:32:58.279190   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 04:32:58.286504   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 04:32:58.294473   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 04:32:58.301244   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 04:32:58.308225   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 04:32:58.315265   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 04:32:58.322603   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 04:32:58.329618   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/15973.pem --> /usr/share/ca-certificates/15973.pem (1338 bytes)
	I0729 04:32:58.336238   18743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/ssl/certs/159732.pem --> /usr/share/ca-certificates/159732.pem (1708 bytes)
	I0729 04:32:58.342953   18743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 04:32:58.348188   18743 ssh_runner.go:195] Run: openssl version
	I0729 04:32:58.349904   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 04:32:58.352696   18743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:32:58.354022   18743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 11:28 /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:32:58.354040   18743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 04:32:58.355809   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 04:32:58.359057   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15973.pem && ln -fs /usr/share/ca-certificates/15973.pem /etc/ssl/certs/15973.pem"
	I0729 04:32:58.362443   18743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15973.pem
	I0729 04:32:58.363929   18743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 11:17 /usr/share/ca-certificates/15973.pem
	I0729 04:32:58.363949   18743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15973.pem
	I0729 04:32:58.365845   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15973.pem /etc/ssl/certs/51391683.0"
	I0729 04:32:58.368609   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159732.pem && ln -fs /usr/share/ca-certificates/159732.pem /etc/ssl/certs/159732.pem"
	I0729 04:32:58.371773   18743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159732.pem
	I0729 04:32:58.373227   18743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 11:17 /usr/share/ca-certificates/159732.pem
	I0729 04:32:58.373250   18743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159732.pem
	I0729 04:32:58.374931   18743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/159732.pem /etc/ssl/certs/3ec20f2e.0"
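
The `openssl x509 -hash -noout` / `ln -fs` pairs above install each PEM under the subject-hash name that OpenSSL uses to look up CAs in /etc/ssl/certs (for minikubeCA.pem that hash is b5213941, hence the b5213941.0 link). A small Go sketch of the same step, assuming the openssl binary is available and the process can write to /etc/ssl/certs:

```go
// Compute a certificate's OpenSSL subject hash and create the <hash>.0
// symlink that the system trust store resolves, as in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
```
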
	I0729 04:32:58.378255   18743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 04:32:58.379817   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 04:32:58.382052   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 04:32:58.383894   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 04:32:58.385822   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 04:32:58.387608   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 04:32:58.389378   18743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
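
Each `-checkend 86400` run above asks openssl whether the certificate will still be valid 24 hours (86400 seconds) from now; a failing check is what prompts minikube to regenerate control-plane certs. The equivalent test with the Go standard library (a sketch; pass a PEM file path as the first argument):

```go
// Replicate `openssl x509 -noout -checkend 86400`: exit nonzero if the
// certificate in the given PEM file expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
}
```
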
	I0729 04:32:58.391082   18743 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53363 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0729 04:32:58.391151   18743 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:32:58.401210   18743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 04:32:58.404516   18743 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 04:32:58.404522   18743 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 04:32:58.404543   18743 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 04:32:58.407319   18743 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 04:32:58.407633   18743 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-514000" does not appear in /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:32:58.407732   18743 kubeconfig.go:62] /Users/jenkins/minikube-integration/19341-15486/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-514000" cluster setting kubeconfig missing "stopped-upgrade-514000" context setting]
	I0729 04:32:58.407927   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:32:58.408550   18743 kapi.go:59] client config for stopped-upgrade-514000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060b8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 04:32:58.408882   18743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 04:32:58.411517   18743 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-514000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
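
Drift detection here is just `diff -u` between the kubeadm.yaml already on the node and the freshly generated .new file; exit status 1 marks the configs as different and routes into the reconfigure path that follows (stop the kube-system containers, rerun the kubeadm init phases). A local stand-in for that check, using the same paths as the log; minikube runs it through its SSH runner instead:

```go
// Detect kubeadm config drift via diff's exit status: 0 means identical,
// 1 means the files differ, anything higher is a real error.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	if err != nil {
		panic(err) // diff itself failed (missing file, etc.)
	}
	fmt.Println("kubeadm config unchanged")
}
```
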
	I0729 04:32:58.411523   18743 kubeadm.go:1160] stopping kube-system containers ...
	I0729 04:32:58.411565   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0729 04:32:58.423405   18743 docker.go:483] Stopping containers: [fb1260acc22b d3755a4fce21 c0c4385482f6 f6ecb8618d59 36af8e90410c 565a0b2bf32c 43bffe5a5082 dfd3430538d4]
	I0729 04:32:58.423467   18743 ssh_runner.go:195] Run: docker stop fb1260acc22b d3755a4fce21 c0c4385482f6 f6ecb8618d59 36af8e90410c 565a0b2bf32c 43bffe5a5082 dfd3430538d4
	I0729 04:32:58.434163   18743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 04:32:58.439506   18743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:32:58.442769   18743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:32:58.442780   18743 kubeadm.go:157] found existing configuration files:
	
	I0729 04:32:58.442804   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf
	I0729 04:32:58.445726   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:32:58.445755   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:32:58.448312   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf
	I0729 04:32:58.450980   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:32:58.451006   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:32:58.453988   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf
	I0729 04:32:58.456444   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:32:58.456462   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:32:58.459116   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf
	I0729 04:32:58.462079   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:32:58.462102   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 04:32:58.464643   18743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:32:58.467340   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.489698   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.843284   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.968689   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:58.995658   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 04:32:59.015900   18743 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:32:59.015983   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:32:59.518027   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:33:00.018044   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:33:00.022473   18743 api_server.go:72] duration metric: took 1.006598167s to wait for apiserver process to appear ...
	I0729 04:33:00.022482   18743 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:33:00.022493   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:05.023889   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:05.023955   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:10.024356   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:10.024409   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:15.024719   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:15.024766   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:20.025140   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:20.025183   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:25.025577   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:25.025598   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:30.026119   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:30.026167   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:35.027220   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:35.027283   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:40.028543   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:40.028616   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:45.030286   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:45.030331   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:50.032239   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:50.032337   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:33:55.034668   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:33:55.034717   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:00.037157   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
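
The loop above probes https://10.0.2.15:8443/healthz with a short per-request timeout, and every probe dies with `context deadline exceeded` because the apiserver never comes up; after about a minute minikube switches to gathering component logs. A sketch of that polling pattern (TLS verification is skipped here purely for brevity; the real client is built from the rest.Config dumped earlier in the log):

```go
// Poll an HTTPS healthz endpoint with a 5s client timeout until either it
// returns 200 or an overall deadline passes, then fall back to diagnostics.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps between probes above
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if err != nil {
			fmt.Println("probe failed:", err)
		} else {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz; gathering component logs instead")
}
```
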
	I0729 04:34:00.037552   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:00.062783   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:00.062899   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:00.077789   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:00.077886   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:00.091367   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:00.091461   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:00.103758   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:00.103834   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:00.115045   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:00.115115   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:00.125629   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:00.125701   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:00.139101   18743 logs.go:276] 0 containers: []
	W0729 04:34:00.139114   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:00.139170   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:00.150167   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:00.150184   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:00.150190   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:00.164332   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:00.164344   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:00.194627   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:00.194639   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:00.206405   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:00.206418   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:00.245445   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:00.245456   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:00.257920   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:00.257931   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:00.271070   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:00.271088   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:00.282308   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:00.282320   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:00.298839   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:00.298850   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:00.310922   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:00.310933   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:00.335402   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:00.335412   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:00.361749   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:00.361763   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:00.373559   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:00.373573   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:00.385274   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:00.385287   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:00.389664   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:00.389670   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:00.494010   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:00.494022   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:00.510377   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:00.510387   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:03.028251   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:08.029115   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:08.029315   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:08.053135   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:08.053226   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:08.074255   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:08.074330   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:08.084850   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:08.084911   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:08.095498   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:08.095574   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:08.106157   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:08.106226   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:08.116518   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:08.116592   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:08.128561   18743 logs.go:276] 0 containers: []
	W0729 04:34:08.128577   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:08.128633   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:08.139160   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:08.139177   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:08.139183   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:08.143578   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:08.143584   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:08.167733   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:08.167746   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:08.182643   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:08.182654   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:08.203523   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:08.203542   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:08.221574   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:08.221585   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:08.235273   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:08.235284   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:08.251366   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:08.251377   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:08.288613   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:08.288623   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:08.324613   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:08.324626   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:08.338354   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:08.338368   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:08.350207   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:08.350220   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:08.364986   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:08.364996   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:08.377038   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:08.377049   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:08.388968   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:08.388983   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:08.401004   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:08.401018   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:08.412965   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:08.412977   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:10.940372   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:15.942548   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:15.942739   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:15.958358   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:15.958441   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:15.970883   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:15.970966   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:15.981849   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:15.981928   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:15.992132   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:15.992204   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:16.002550   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:16.002615   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:16.013166   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:16.013237   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:16.023486   18743 logs.go:276] 0 containers: []
	W0729 04:34:16.023497   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:16.023557   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:16.033938   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:16.033954   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:16.033959   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:16.059152   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:16.059162   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:16.070185   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:16.070196   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:16.082188   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:16.082200   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:16.094887   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:16.094899   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:16.109283   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:16.109293   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:16.121766   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:16.121779   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:16.133396   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:16.133407   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:16.149011   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:16.149022   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:16.183360   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:16.183371   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:16.188072   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:16.188079   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:16.208332   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:16.208342   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:16.227856   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:16.227866   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:16.239624   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:16.239638   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:16.257962   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:16.257972   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:16.269288   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:16.269300   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:16.293404   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:16.293414   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:18.833164   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:23.835247   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:23.835409   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:23.847534   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:23.847607   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:23.857966   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:23.858027   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:23.868277   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:23.868339   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:23.878783   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:23.878846   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:23.889263   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:23.889325   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:23.899901   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:23.899963   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:23.911772   18743 logs.go:276] 0 containers: []
	W0729 04:34:23.911784   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:23.911839   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:23.922627   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:23.922641   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:23.922646   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:23.936746   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:23.936759   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:23.955888   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:23.955899   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:23.966600   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:23.966609   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:23.990371   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:23.990378   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:24.002046   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:24.002054   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:24.038554   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:24.038565   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:24.064182   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:24.064194   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:24.078525   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:24.078538   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:24.093388   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:24.093400   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:24.105230   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:24.105245   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:24.122775   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:24.122787   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:24.127180   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:24.127186   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:24.139000   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:24.139014   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:24.176935   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:24.176950   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:24.188162   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:24.188175   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:24.200779   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:24.200792   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:26.714955   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:31.717288   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:31.717475   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:31.738580   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:31.738676   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:31.761845   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:31.761924   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:31.778441   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:31.778505   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:31.789031   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:31.789096   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:31.802908   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:31.802968   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:31.819385   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:31.819456   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:31.829965   18743 logs.go:276] 0 containers: []
	W0729 04:34:31.829980   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:31.830032   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:31.840948   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:31.840969   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:31.840974   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:31.855683   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:31.855698   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:31.868991   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:31.869005   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:31.882235   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:31.882245   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:31.893791   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:31.893802   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:31.917015   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:31.917023   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:31.928561   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:31.928571   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:31.952141   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:31.952150   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:31.986280   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:31.986291   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:32.010502   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:32.010514   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:32.014690   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:32.014698   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:32.028333   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:32.028344   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:32.041887   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:32.041899   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:32.057085   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:32.057098   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:32.095384   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:32.095395   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:32.107317   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:32.107333   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:32.124973   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:32.124984   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:34.638476   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:39.640245   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:39.640438   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:39.665843   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:39.665965   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:39.684577   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:39.684657   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:39.697770   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:39.697848   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:39.713360   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:39.713432   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:39.723530   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:39.723602   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:39.738050   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:39.738122   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:39.747990   18743 logs.go:276] 0 containers: []
	W0729 04:34:39.748002   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:39.748056   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:39.758669   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:39.758692   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:39.758697   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:39.771538   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:39.771550   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:39.783242   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:39.783255   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:39.795596   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:39.795608   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:39.820145   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:39.820159   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:39.857458   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:39.857471   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:39.872895   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:39.872906   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:39.885005   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:39.885017   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:39.916183   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:39.916194   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:39.930331   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:39.930346   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:39.942755   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:39.942774   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:39.962251   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:39.962261   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:39.966237   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:39.966244   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:40.007359   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:40.007371   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:40.021721   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:40.021732   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:40.040494   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:40.040505   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:40.052013   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:40.052024   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
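
The block above is one complete pass of minikube's diagnostic loop: after the healthz probe times out, logs.go enumerates each control-plane component's containers with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" (logs.go:276) and then tails every hit with "docker logs --tail 400 <id>" (logs.go:123). Below is a minimal Go sketch of that enumerate-and-tail sweep, assuming plain os/exec against a local Docker daemon in place of minikube's ssh_runner; the helper names are illustrative, not minikube's own:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the logs.go:276 step: list the IDs of all
    // containers, running or exited, whose name matches k8s_<component>.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors the logs.go:123 step: grab the last 400 lines of one
    // container's output, as "docker logs --tail 400 <id>" does in the log.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // The eight components the loop above enumerates on every iteration.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("listing", c, "failed:", err)
                continue
            }
            if len(ids) == 0 {
                // cf. logs.go:278: No container was found matching "kindnet"
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                if logs, err := tailLogs(id); err == nil {
                    fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
                }
            }
        }
    }

Run against the node in this report, such a sweep would find the same container set on every pass, including the empty kindnet match that produces the 'No container was found matching "kindnet"' warning.
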
	I0729 04:34:42.566270   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:47.568766   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:47.568973   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:47.591222   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:47.591335   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:47.605969   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:47.606046   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:47.618497   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:47.618564   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:47.629197   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:47.629263   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:47.640509   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:47.640572   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:47.651312   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:47.651380   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:47.662461   18743 logs.go:276] 0 containers: []
	W0729 04:34:47.662472   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:47.662532   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:47.673198   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:47.673216   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:47.673222   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:47.687053   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:47.687064   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:47.699281   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:47.699293   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:47.716799   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:47.716813   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:34:47.729580   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:47.729592   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:47.767268   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:47.767276   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:47.771999   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:47.772009   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:47.809278   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:47.809290   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:47.821186   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:47.821196   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:47.836313   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:47.836321   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:47.850499   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:47.850509   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:47.864027   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:47.864043   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:47.894647   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:47.894667   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:47.919640   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:47.919651   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:47.933860   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:47.933872   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:47.945140   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:47.945152   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:47.956729   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:47.956741   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:50.473848   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:34:55.475954   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:34:55.476110   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:34:55.490996   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:34:55.491069   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:34:55.504682   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:34:55.504799   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:34:55.515853   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:34:55.515922   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:34:55.526510   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:34:55.526570   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:34:55.537037   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:34:55.537099   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:34:55.551232   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:34:55.551306   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:34:55.561440   18743 logs.go:276] 0 containers: []
	W0729 04:34:55.561452   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:34:55.561504   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:34:55.571621   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:34:55.571640   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:34:55.571647   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:34:55.589533   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:34:55.589544   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:34:55.605152   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:34:55.605163   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:34:55.616897   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:34:55.616911   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:34:55.630307   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:34:55.630322   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:34:55.651935   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:34:55.651947   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:34:55.677796   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:34:55.677808   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:34:55.715175   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:34:55.715185   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:34:55.751620   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:34:55.751632   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:34:55.778220   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:34:55.778233   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:34:55.789907   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:34:55.789919   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:34:55.802290   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:34:55.802301   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:34:55.808555   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:34:55.808564   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:34:55.823150   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:34:55.823164   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:34:55.834373   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:34:55.834385   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:34:55.847303   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:34:55.847315   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:34:55.863034   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:34:55.863044   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
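
Each cycle opens at api_server.go:253 with a GET against https://10.0.2.15:8443/healthz and, five seconds later, the api_server.go:269 line "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)"; that error string is what Go's net/http client emits when its Timeout elapses before response headers arrive. A minimal sketch of such a probe, assuming a bare http.Client rather than minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // A 5-second client timeout reproduces the gap between the
        // "Checking apiserver healthz" line and the "stopped: ...
        // Client.Timeout exceeded while awaiting headers" line above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: a bare probe has no cluster CA to hand, so it
            // skips certificate verification; minikube itself trusts the
            // CA it generated for the cluster.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            // With the apiserver unreachable this prints the same
            // "context deadline exceeded" error seen in the report.
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }

With the apiserver never coming healthy, every probe returns the same timeout, which is why the report repeats the check/stopped pair roughly every eight seconds before each log-gathering pass.
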
	I0729 04:34:58.382205   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:03.384442   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:03.384615   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:03.405366   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:03.405448   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:03.418811   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:03.418876   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:03.429636   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:03.429705   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:03.439900   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:03.439964   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:03.450821   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:03.450887   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:03.461273   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:03.461339   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:03.471327   18743 logs.go:276] 0 containers: []
	W0729 04:35:03.471340   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:03.471396   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:03.485968   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:03.485985   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:03.485991   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:03.497423   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:03.497435   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:03.522080   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:03.522091   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:03.559689   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:03.559700   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:03.573293   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:03.573303   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:03.595290   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:03.595303   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:03.610127   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:03.610139   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:03.621544   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:03.621554   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:03.638981   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:03.638995   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:03.651299   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:03.651310   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:03.663091   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:03.663106   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:03.667656   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:03.667663   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:03.692149   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:03.692160   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:03.703675   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:03.703689   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:03.718645   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:03.718655   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:03.732083   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:03.732095   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:03.769025   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:03.769036   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:06.285934   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:11.288207   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:11.288363   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:11.309570   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:11.309663   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:11.324820   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:11.324901   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:11.337474   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:11.337547   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:11.348571   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:11.348651   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:11.359348   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:11.359414   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:11.370120   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:11.370189   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:11.380430   18743 logs.go:276] 0 containers: []
	W0729 04:35:11.380442   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:11.380501   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:11.391032   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:11.391051   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:11.391057   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:11.404785   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:11.404795   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:11.416195   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:11.416207   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:11.427113   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:11.427122   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:11.439026   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:11.439037   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:11.450810   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:11.450822   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:11.472493   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:11.472504   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:11.485108   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:11.485119   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:11.520887   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:11.520898   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:11.556578   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:11.556589   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:11.580933   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:11.580943   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:11.595253   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:11.595263   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:11.609325   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:11.609336   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:11.621441   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:11.621453   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:11.625604   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:11.625612   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:11.639576   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:11.639585   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:11.655311   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:11.655322   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:14.181845   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:19.184006   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:19.184114   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:19.195365   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:19.195440   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:19.206257   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:19.206353   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:19.217014   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:19.217084   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:19.227309   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:19.227373   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:19.237967   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:19.238037   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:19.248446   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:19.248514   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:19.258551   18743 logs.go:276] 0 containers: []
	W0729 04:35:19.258562   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:19.258621   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:19.279895   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:19.279913   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:19.279918   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:19.318263   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:19.318281   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:19.346575   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:19.346604   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:19.364045   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:19.364059   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:19.404037   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:19.404048   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:19.418682   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:19.418696   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:19.443950   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:19.443960   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:19.461697   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:19.461711   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:19.473430   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:19.473442   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:19.488393   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:19.488405   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:19.502426   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:19.502443   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:19.514006   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:19.514018   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:19.532721   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:19.532734   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:19.544679   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:19.544690   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:19.548535   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:19.548541   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:19.560390   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:19.560400   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:19.572567   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:19.572578   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
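
The "container status" step runs: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. That is a two-level fallback: use crictl when it is on PATH, and fall back to plain "docker ps -a" when crictl is absent or errors out. A sketch of the same fallback in Go, assuming os/exec and passwordless sudo in the environment; containerStatus is an illustrative name, not a minikube function:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus follows the fallback in the log: prefer crictl when
    // it is installed and succeeds, otherwise fall back to "docker ps -a".
    func containerStatus() (string, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
            if err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }
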
	I0729 04:35:22.085903   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:27.088035   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:27.088217   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:27.106012   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:27.106108   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:27.121519   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:27.121596   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:27.139607   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:27.139678   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:27.149694   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:27.149768   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:27.160427   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:27.160493   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:27.170800   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:27.170867   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:27.180766   18743 logs.go:276] 0 containers: []
	W0729 04:35:27.180780   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:27.180837   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:27.191509   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:27.191526   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:27.191532   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:27.195795   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:27.195803   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:27.207740   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:27.207753   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:27.226653   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:27.226664   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:27.238968   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:27.238980   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:27.250268   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:27.250279   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:27.262029   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:27.262040   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:27.279830   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:27.279847   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:27.305824   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:27.305835   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:27.324586   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:27.324597   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:27.336102   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:27.336114   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:27.347966   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:27.347976   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:27.371061   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:27.371073   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:27.390331   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:27.390344   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:27.401345   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:27.401356   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:27.440573   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:27.440585   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:27.476591   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:27.476604   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:29.991971   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:34.994353   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:34.994674   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:35.027102   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:35.027238   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:35.046424   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:35.046523   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:35.060660   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:35.060736   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:35.074107   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:35.074187   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:35.085005   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:35.085079   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:35.097116   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:35.097185   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:35.108414   18743 logs.go:276] 0 containers: []
	W0729 04:35:35.108427   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:35.108488   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:35.119684   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:35.119702   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:35.119706   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:35.131265   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:35.131276   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:35.145408   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:35.145418   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:35.168726   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:35.168736   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:35.183390   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:35.183401   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:35.200290   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:35.200302   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:35.217243   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:35.217255   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:35.241478   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:35.241492   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:35.253111   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:35.253123   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:35.257620   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:35.257626   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:35.294165   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:35.294180   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:35.309146   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:35.309156   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:35.336618   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:35.336630   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:35.351359   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:35.351370   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:35.366509   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:35.366520   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:35.378570   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:35.378583   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:35.390276   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:35.390286   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:37.929192   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:42.931650   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:42.931832   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:42.944554   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:42.944629   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:42.955799   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:42.955872   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:42.966313   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:42.966381   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:42.977054   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:42.977132   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:42.992515   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:42.992586   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:43.003656   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:43.003728   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:43.013332   18743 logs.go:276] 0 containers: []
	W0729 04:35:43.013352   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:43.013410   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:43.023671   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:43.023697   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:43.023703   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:43.028008   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:43.028017   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:43.052328   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:43.052340   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:43.066065   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:43.066076   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:43.077813   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:43.077827   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:43.095896   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:43.095908   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:43.110198   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:43.110211   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:43.122553   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:43.122565   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:43.162444   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:43.162463   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:43.198715   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:43.198727   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:43.209618   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:43.209627   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:43.222358   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:43.222370   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:43.236711   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:43.236721   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:43.252540   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:43.252550   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:43.268231   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:43.268241   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:43.280530   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:43.280542   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:43.292611   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:43.292623   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:45.819327   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:50.821531   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:50.821782   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:50.843484   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:50.843588   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:50.863170   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:50.863242   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:50.874767   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:50.874843   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:50.886269   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:50.886344   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:50.896988   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:50.897058   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:50.908173   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:50.908242   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:50.919661   18743 logs.go:276] 0 containers: []
	W0729 04:35:50.919675   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:50.919738   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:50.930462   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:50.930480   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:50.930485   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:50.969839   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:50.969848   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:50.982552   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:50.982565   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:51.001233   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:51.001244   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:35:51.012857   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:51.012868   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:51.035950   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:51.035957   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:51.049766   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:51.049776   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:51.074206   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:51.074217   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:51.089170   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:51.089181   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:51.100787   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:51.100799   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:51.112636   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:51.112648   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:51.124859   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:51.124872   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:51.166239   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:51.166254   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:51.180764   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:51.180777   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:51.197744   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:51.197756   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:51.210347   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:51.210360   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:51.214623   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:51.214630   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:53.728189   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:35:58.730709   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:35:58.730909   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:35:58.747541   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:35:58.747628   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:35:58.760974   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:35:58.761049   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:35:58.772174   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:35:58.772246   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:35:58.783507   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:35:58.783577   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:35:58.793901   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:35:58.793969   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:35:58.808744   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:35:58.808819   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:35:58.819188   18743 logs.go:276] 0 containers: []
	W0729 04:35:58.819201   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:35:58.819261   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:35:58.829247   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:35:58.829267   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:35:58.829273   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:35:58.841955   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:35:58.841967   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:35:58.860076   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:35:58.860090   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:35:58.872589   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:35:58.872601   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:35:58.884537   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:35:58.884547   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:35:58.900846   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:35:58.900861   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:35:58.912072   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:35:58.912084   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:35:58.923877   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:35:58.923893   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:35:58.961284   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:35:58.961292   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:35:58.997444   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:35:58.997460   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:35:59.011892   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:35:59.011907   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:35:59.034272   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:35:59.034283   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:35:59.056898   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:35:59.056905   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:35:59.061061   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:35:59.061067   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:35:59.086397   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:35:59.086409   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:35:59.101102   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:35:59.101114   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:35:59.113210   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:35:59.113221   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
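	(Note: each "Gathering logs for ..." step above is a bounded tail of one container's output. A one-function Go sketch of that step; the container ID is taken from the run above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailLogs returns the last 400 lines of a container's stdout+stderr,
	// mirroring the "docker logs --tail 400 <id>" commands in the log.
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := tailLogs("d72df3d76a6d") // kube-controller-manager ID above
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(out)
	}

	End of note.)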
	I0729 04:36:01.630161   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:06.632534   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
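	(Note: the healthz probe pattern above is check, ~5-second client timeout, re-gather logs, retry. A minimal sketch with a plain net/http client; InsecureSkipVerify is an assumption made here to keep the sketch self-contained, whereas minikube verifies against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 0; attempt < 10; attempt++ {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Println(resp.Status, string(body))
			return
		}
	}

	End of note.)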
	I0729 04:36:06.632778   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:06.659178   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:06.659290   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:06.677328   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:06.677405   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:06.690351   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:06.690425   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:06.702085   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:06.702161   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:06.713035   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:06.713103   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:06.726106   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:06.726178   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:06.741133   18743 logs.go:276] 0 containers: []
	W0729 04:36:06.741144   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:06.741200   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:06.758142   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:06.758159   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:06.758164   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:06.783079   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:06.783092   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:06.796135   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:06.796145   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:06.808183   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:06.808197   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:06.843825   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:06.843841   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:06.860558   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:06.860575   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:06.872969   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:06.872980   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:06.885272   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:06.885286   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:06.923291   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:06.923304   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:06.938806   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:06.938816   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:06.962580   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:06.962592   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:06.975344   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:06.975358   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:07.000459   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:07.000469   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:07.014860   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:07.014872   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:07.029061   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:07.029073   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:07.040559   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:07.040572   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:07.052218   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:07.052230   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:09.558766   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:14.560944   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:14.561079   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:14.576515   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:14.576595   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:14.594077   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:14.594146   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:14.604587   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:14.604676   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:14.615970   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:14.616047   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:14.626846   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:14.626918   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:14.638122   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:14.638190   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:14.648410   18743 logs.go:276] 0 containers: []
	W0729 04:36:14.648420   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:14.648473   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:14.659168   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:14.659185   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:14.659191   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:14.670393   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:14.670406   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:14.685722   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:14.685732   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:14.697286   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:14.697298   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:14.711435   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:14.711446   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:14.726914   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:14.726926   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:14.739523   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:14.739537   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:14.750994   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:14.751005   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:14.789459   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:14.789471   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:14.819602   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:14.819615   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:14.838287   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:14.838302   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:14.862081   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:14.862089   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:14.875018   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:14.875030   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:14.879747   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:14.879754   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:14.915777   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:14.915789   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:14.930135   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:14.930146   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:14.942114   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:14.942128   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:17.465334   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:22.467681   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:22.467815   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:22.509347   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:22.509438   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:22.528501   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:22.528574   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:22.550602   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:22.550674   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:22.561900   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:22.561966   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:22.572649   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:22.572720   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:22.582798   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:22.582860   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:22.593317   18743 logs.go:276] 0 containers: []
	W0729 04:36:22.593329   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:22.593382   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:22.603629   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:22.603645   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:22.603650   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:22.608357   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:22.608364   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:22.622692   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:22.622703   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:22.637551   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:22.637562   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:22.651353   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:22.651365   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:22.663169   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:22.663181   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:22.677403   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:22.677414   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:22.702852   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:22.702864   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:22.714800   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:22.714812   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:22.731941   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:22.731953   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:22.744327   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:22.744339   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:22.757832   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:22.757842   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:22.768965   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:22.768976   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:22.792954   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:22.792963   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:22.829562   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:22.829572   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:22.863804   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:22.863815   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:22.882743   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:22.882755   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:25.396620   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:30.397151   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:30.397339   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:30.412773   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:30.412857   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:30.425293   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:30.425353   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:30.436136   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:30.436205   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:30.446863   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:30.446935   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:30.460596   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:30.460670   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:30.478529   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:30.478599   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:30.488389   18743 logs.go:276] 0 containers: []
	W0729 04:36:30.488404   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:30.488459   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:30.498646   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:30.498663   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:30.498669   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:30.536895   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:30.536911   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:30.553510   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:30.553522   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:30.569107   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:30.569118   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:30.580907   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:30.580918   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:30.598397   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:30.598410   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:30.610031   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:30.610041   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:30.622384   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:30.622395   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:30.659658   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:30.659671   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:30.684993   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:30.685007   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:30.706822   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:30.706834   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:30.730239   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:30.730253   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:30.735004   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:30.735011   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:30.749479   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:30.749490   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:30.761433   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:30.761444   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:30.776613   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:30.776625   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:30.792372   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:30.792386   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:33.308742   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:38.310983   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:38.311166   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:38.327324   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:38.327410   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:38.339941   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:38.340009   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:38.350813   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:38.350889   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:38.369001   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:38.369066   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:38.379382   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:38.379442   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:38.393529   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:38.393599   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:38.403847   18743 logs.go:276] 0 containers: []
	W0729 04:36:38.403860   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:38.403916   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:38.415255   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:38.415272   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:38.415278   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:38.451013   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:38.451029   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:38.464789   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:38.464800   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:38.478552   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:38.478563   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:38.493430   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:38.493444   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:38.505940   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:38.505950   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:38.510312   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:38.510318   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:38.523617   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:38.523629   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:38.534967   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:38.534979   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:38.571620   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:38.571636   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:38.585530   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:38.585543   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:38.609404   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:38.609420   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:38.636807   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:38.636818   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:38.651585   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:38.651600   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:38.663709   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:38.663721   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:38.675911   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:38.675922   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:38.699536   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:38.699547   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:41.214451   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:46.216701   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:46.216929   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:46.237656   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:46.237751   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:46.252203   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:46.252274   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:46.264158   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:46.264227   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:46.275286   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:46.275408   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:46.286251   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:46.286313   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:46.297164   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:46.297223   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:46.307551   18743 logs.go:276] 0 containers: []
	W0729 04:36:46.307561   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:46.307614   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:46.317821   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:46.317845   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:46.317850   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:46.334537   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:46.334552   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:46.352179   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:46.352194   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:46.367317   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:46.367328   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:46.391832   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:46.391847   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:46.405206   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:46.405221   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:46.416404   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:46.416416   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:46.428802   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:46.428812   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:46.441860   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:46.441874   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:46.464577   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:46.464585   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:46.468486   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:46.468492   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:46.481920   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:46.481930   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:46.496191   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:46.496205   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:46.507451   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:46.507462   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:46.519682   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:46.519696   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:46.557114   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:46.557127   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:46.579576   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:46.579590   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:49.119608   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:36:54.121913   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:36:54.122277   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:36:54.157851   18743 logs.go:276] 2 containers: [bd4857b46b80 fb1260acc22b]
	I0729 04:36:54.157999   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:36:54.176721   18743 logs.go:276] 2 containers: [51e4efdc109b d3755a4fce21]
	I0729 04:36:54.176820   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:36:54.196220   18743 logs.go:276] 1 containers: [adf6dc10da28]
	I0729 04:36:54.196295   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:36:54.220867   18743 logs.go:276] 2 containers: [d73004ba6137 f6ecb8618d59]
	I0729 04:36:54.220944   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:36:54.232139   18743 logs.go:276] 1 containers: [aead60b2c4e9]
	I0729 04:36:54.232204   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:36:54.245675   18743 logs.go:276] 2 containers: [d72df3d76a6d 36af8e90410c]
	I0729 04:36:54.245741   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:36:54.255839   18743 logs.go:276] 0 containers: []
	W0729 04:36:54.255856   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:36:54.255917   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:36:54.266857   18743 logs.go:276] 2 containers: [2683d1a1509f 313e03545663]
	I0729 04:36:54.266876   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:36:54.266884   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:36:54.307759   18743 logs.go:123] Gathering logs for kube-apiserver [fb1260acc22b] ...
	I0729 04:36:54.307774   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb1260acc22b"
	I0729 04:36:54.333785   18743 logs.go:123] Gathering logs for etcd [d3755a4fce21] ...
	I0729 04:36:54.333797   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3755a4fce21"
	I0729 04:36:54.348749   18743 logs.go:123] Gathering logs for kube-proxy [aead60b2c4e9] ...
	I0729 04:36:54.348759   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aead60b2c4e9"
	I0729 04:36:54.360563   18743 logs.go:123] Gathering logs for etcd [51e4efdc109b] ...
	I0729 04:36:54.360573   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e4efdc109b"
	I0729 04:36:54.375131   18743 logs.go:123] Gathering logs for coredns [adf6dc10da28] ...
	I0729 04:36:54.375173   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf6dc10da28"
	I0729 04:36:54.386723   18743 logs.go:123] Gathering logs for kube-scheduler [d73004ba6137] ...
	I0729 04:36:54.386736   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d73004ba6137"
	I0729 04:36:54.405543   18743 logs.go:123] Gathering logs for kube-scheduler [f6ecb8618d59] ...
	I0729 04:36:54.405553   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6ecb8618d59"
	I0729 04:36:54.421688   18743 logs.go:123] Gathering logs for kube-controller-manager [36af8e90410c] ...
	I0729 04:36:54.421701   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36af8e90410c"
	I0729 04:36:54.434560   18743 logs.go:123] Gathering logs for storage-provisioner [313e03545663] ...
	I0729 04:36:54.434573   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313e03545663"
	I0729 04:36:54.446315   18743 logs.go:123] Gathering logs for kube-apiserver [bd4857b46b80] ...
	I0729 04:36:54.446327   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd4857b46b80"
	I0729 04:36:54.459843   18743 logs.go:123] Gathering logs for kube-controller-manager [d72df3d76a6d] ...
	I0729 04:36:54.459857   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d72df3d76a6d"
	I0729 04:36:54.477936   18743 logs.go:123] Gathering logs for storage-provisioner [2683d1a1509f] ...
	I0729 04:36:54.477949   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2683d1a1509f"
	I0729 04:36:54.490214   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:36:54.490228   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:36:54.527087   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:36:54.527096   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:36:54.531731   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:36:54.531738   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:36:54.554698   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:36:54.554705   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:36:57.068829   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:02.071161   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:02.071230   18743 kubeadm.go:597] duration metric: took 4m3.672683541s to restartPrimaryControlPlane
	W0729 04:37:02.071286   18743 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 04:37:02.071314   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0729 04:37:03.101472   18743 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.030169916s)
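	(Note: the "duration metric: took ..." lines come from timing each step. A minimal sketch of the pattern, with a no-op command standing in for the kubeadm reset:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		_ = exec.Command("true").Run() // stand-in for the timed step
		fmt.Printf("duration metric: took %s\n", time.Since(start))
	}

	End of note.)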
	I0729 04:37:03.101553   18743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 04:37:03.106417   18743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 04:37:03.109252   18743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 04:37:03.111784   18743 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 04:37:03.111790   18743 kubeadm.go:157] found existing configuration files:
	
	I0729 04:37:03.111815   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf
	I0729 04:37:03.114821   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 04:37:03.114843   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 04:37:03.118325   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf
	I0729 04:37:03.121035   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 04:37:03.121060   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 04:37:03.123547   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf
	I0729 04:37:03.126130   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 04:37:03.126152   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 04:37:03.129006   18743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf
	I0729 04:37:03.131536   18743 kubeadm.go:163] "https://control-plane.minikube.internal:53363" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53363 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 04:37:03.131561   18743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
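	(Note: the grep-then-rm sequence above implements stale-config cleanup: keep a kubeconfig file only if it references the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A Go sketch of that logic, with the endpoint and paths taken from the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStale removes path unless it exists and mentions endpoint.
	func cleanStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			return nil // config matches, keep it
		}
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			return err
		}
		return nil
	}

	func main() {
		for _, f := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			if err := cleanStale("/etc/kubernetes/"+f,
				"https://control-plane.minikube.internal:53363"); err != nil {
				fmt.Println(err)
			}
		}
	}

	End of note.)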
	I0729 04:37:03.134717   18743 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 04:37:03.151909   18743 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0729 04:37:03.152010   18743 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 04:37:03.201599   18743 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 04:37:03.201691   18743 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 04:37:03.201764   18743 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 04:37:03.250769   18743 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 04:37:03.258943   18743 out.go:204]   - Generating certificates and keys ...
	I0729 04:37:03.258978   18743 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 04:37:03.259010   18743 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 04:37:03.259056   18743 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 04:37:03.259089   18743 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 04:37:03.259122   18743 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 04:37:03.259158   18743 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 04:37:03.259200   18743 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 04:37:03.259235   18743 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 04:37:03.259275   18743 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 04:37:03.259322   18743 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 04:37:03.259340   18743 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 04:37:03.259367   18743 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 04:37:03.497224   18743 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 04:37:03.630617   18743 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 04:37:03.683596   18743 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 04:37:03.720522   18743 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 04:37:03.751567   18743 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 04:37:03.752390   18743 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 04:37:03.752429   18743 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 04:37:03.839435   18743 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 04:37:03.842603   18743 out.go:204]   - Booting up control plane ...
	I0729 04:37:03.842660   18743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 04:37:03.842709   18743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 04:37:03.842788   18743 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 04:37:03.842862   18743 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 04:37:03.842988   18743 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 04:37:07.841111   18743 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001918 seconds
	I0729 04:37:07.841175   18743 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 04:37:07.844573   18743 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 04:37:08.353928   18743 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 04:37:08.354045   18743 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 04:37:08.859477   18743 kubeadm.go:310] [bootstrap-token] Using token: ttptur.zxqljb2zjeuj67nz
	I0729 04:37:08.865640   18743 out.go:204]   - Configuring RBAC rules ...
	I0729 04:37:08.865697   18743 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 04:37:08.865744   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 04:37:08.872648   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 04:37:08.873714   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 04:37:08.874898   18743 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 04:37:08.875721   18743 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 04:37:08.879141   18743 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 04:37:09.059576   18743 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 04:37:09.263977   18743 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 04:37:09.264641   18743 kubeadm.go:310] 
	I0729 04:37:09.264672   18743 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 04:37:09.264675   18743 kubeadm.go:310] 
	I0729 04:37:09.264711   18743 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 04:37:09.264713   18743 kubeadm.go:310] 
	I0729 04:37:09.264729   18743 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 04:37:09.264808   18743 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 04:37:09.264861   18743 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 04:37:09.264867   18743 kubeadm.go:310] 
	I0729 04:37:09.264896   18743 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 04:37:09.264899   18743 kubeadm.go:310] 
	I0729 04:37:09.264933   18743 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 04:37:09.264936   18743 kubeadm.go:310] 
	I0729 04:37:09.264962   18743 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 04:37:09.265034   18743 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 04:37:09.265076   18743 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 04:37:09.265079   18743 kubeadm.go:310] 
	I0729 04:37:09.265135   18743 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 04:37:09.265193   18743 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 04:37:09.265204   18743 kubeadm.go:310] 
	I0729 04:37:09.265262   18743 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ttptur.zxqljb2zjeuj67nz \
	I0729 04:37:09.265316   18743 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 \
	I0729 04:37:09.265326   18743 kubeadm.go:310] 	--control-plane 
	I0729 04:37:09.265328   18743 kubeadm.go:310] 
	I0729 04:37:09.265367   18743 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 04:37:09.265369   18743 kubeadm.go:310] 
	I0729 04:37:09.265408   18743 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ttptur.zxqljb2zjeuj67nz \
	I0729 04:37:09.265462   18743 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:61250418a92f64cc21f880dcd095606f8607c1c11d80f25df99b7d542aabf8c2 
	I0729 04:37:09.265516   18743 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
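	(Note: the bootstrap token issued above follows kubeadm's documented "[a-z0-9]{6}.[a-z0-9]{16}" format. A quick validation sketch:

	package main

	import (
		"fmt"
		"regexp"
	)

	var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

	func main() {
		fmt.Println(tokenRe.MatchString("ttptur.zxqljb2zjeuj67nz")) // true
	}

	End of note.)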
	I0729 04:37:09.265524   18743 cni.go:84] Creating CNI manager for ""
	I0729 04:37:09.265532   18743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:37:09.269329   18743 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 04:37:09.277328   18743 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 04:37:09.280436   18743 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
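	(Note: the 496-byte conflist written above is not shown in the log. The sketch below writes a hypothetical minimal bridge conflist of the kind minikube generates; all field values here are assumptions, not the actual file contents. Requires root to write under /etc/cni:

	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
			[]byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}

	End of note.)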
	I0729 04:37:09.285190   18743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 04:37:09.285254   18743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 04:37:09.285310   18743 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-514000 minikube.k8s.io/updated_at=2024_07_29T04_37_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b867516af467da0393bcbe7e6497c888199628ff minikube.k8s.io/name=stopped-upgrade-514000 minikube.k8s.io/primary=true
	I0729 04:37:09.288658   18743 ops.go:34] apiserver oom_adj: -16
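	(Note: ops.go reads the apiserver's OOM score adjustment straight from procfs; -16 above means the kernel's OOM killer strongly avoids the process. A Linux-only sketch of the same check, assuming pgrep is available:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Newest process whose name is exactly kube-apiserver.
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no apiserver process:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}

	End of note.)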
	I0729 04:37:09.348775   18743 kubeadm.go:1113] duration metric: took 63.562334ms to wait for elevateKubeSystemPrivileges
	I0729 04:37:09.348798   18743 kubeadm.go:394] duration metric: took 4m10.963877875s to StartCluster
	I0729 04:37:09.348807   18743 settings.go:142] acquiring lock: {Name:mk7d7deaddc5161eee59fbf4fca49f66001c194c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:37:09.348888   18743 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:37:09.349311   18743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/kubeconfig: {Name:mk01c5aa9060b104010e51a5796278cdf7a7a206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:37:09.349494   18743 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:37:09.349501   18743 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 04:37:09.349574   18743 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-514000"
	I0729 04:37:09.349577   18743 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:37:09.349587   18743 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-514000"
	W0729 04:37:09.349591   18743 addons.go:243] addon storage-provisioner should already be in state true
	I0729 04:37:09.349606   18743 host.go:66] Checking if "stopped-upgrade-514000" exists ...
	I0729 04:37:09.349614   18743 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-514000"
	I0729 04:37:09.349627   18743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-514000"
	I0729 04:37:09.353332   18743 out.go:177] * Verifying Kubernetes components...
	I0729 04:37:09.354123   18743 kapi.go:59] client config for stopped-upgrade-514000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/stopped-upgrade-514000/client.key", CAFile:"/Users/jenkins/minikube-integration/19341-15486/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060b8080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
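For orientation, the rest.Config dumped above is what client-go derives from the profile's kubeconfig. A minimal sketch of building an equivalent config follows; the kubeconfig path is taken from this run's log, but the code itself is illustrative and is not minikube's actual kapi.go implementation:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path copied from this run's log; adjust for a local setup.
    	kubeconfig := "/Users/jenkins/minikube-integration/19341-15486/kubeconfig"

    	// Build a *rest.Config comparable to the one dumped by kapi.go:59 above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}

    	// A clientset built from this config is what the verifier uses
    	// to reach https://10.0.2.15:8443.
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("host:", cfg.Host, "client ready:", clientset != nil)
    }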
	I0729 04:37:09.357693   18743 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-514000"
	W0729 04:37:09.357697   18743 addons.go:243] addon default-storageclass should already be in state true
	I0729 04:37:09.357705   18743 host.go:66] Checking if "stopped-upgrade-514000" exists ...
	I0729 04:37:09.358215   18743 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 04:37:09.358220   18743 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 04:37:09.358225   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:37:09.361295   18743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 04:37:09.365297   18743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 04:37:09.369329   18743 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:37:09.369337   18743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 04:37:09.369346   18743 sshutil.go:53] new ssh client: &{IP:localhost Port:53329 SSHKeyPath:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/stopped-upgrade-514000/id_rsa Username:docker}
	I0729 04:37:09.456086   18743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 04:37:09.461370   18743 api_server.go:52] waiting for apiserver process to appear ...
	I0729 04:37:09.461411   18743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 04:37:09.464970   18743 api_server.go:72] duration metric: took 115.468208ms to wait for apiserver process to appear ...
	I0729 04:37:09.464980   18743 api_server.go:88] waiting for apiserver healthz status ...
	I0729 04:37:09.464987   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:09.508148   18743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 04:37:09.521646   18743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 04:37:14.466397   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:14.466455   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:19.466795   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:19.466825   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:24.466922   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:24.467001   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:29.467185   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:29.467232   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:34.467624   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:34.467675   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:39.468162   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:39.468192   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0729 04:37:39.857404   18743 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0729 04:37:39.861946   18743 out.go:177] * Enabled addons: storage-provisioner
	I0729 04:37:39.869018   18743 addons.go:510] duration metric: took 30.520270708s for enable addons: enabled=[storage-provisioner]
	I0729 04:37:44.469049   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:44.469114   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:49.469974   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:49.470017   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:54.471124   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:54.471146   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:37:59.472509   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:37:59.472564   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:04.474639   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:04.474669   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:09.476777   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
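The repeating pattern above — a healthz probe that gives up after roughly five seconds with "context deadline exceeded" — is what an HTTP client with a per-request timeout produces when the apiserver never answers. A minimal sketch of that probe loop is below; the 5s timeout is inferred from the log timestamps, and the TLS-skip is for illustration only (minikube's api_server.go verifies against the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		// Inferred from the ~5s gaps between probes above; an assumption.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Illustration only; the real check trusts the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for attempt := 1; attempt <= 3; attempt++ {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			// Matches the "stopped: ... Client.Timeout exceeded" lines.
    			fmt.Println("stopped:", err)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("healthz:", resp.Status)
    		return
    	}
    }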
	I0729 04:38:09.476935   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:09.489547   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:09.489617   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:09.500418   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:09.500486   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:09.510966   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:09.511032   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:09.521795   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:09.521866   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:09.532542   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:09.532619   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:09.542778   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:09.542843   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:09.552833   18743 logs.go:276] 0 containers: []
	W0729 04:38:09.552844   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:09.552900   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:09.562720   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:09.562737   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:09.562742   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:09.574620   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:09.574636   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:09.608845   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:09.608858   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:09.623993   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:09.624009   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:09.635456   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:09.635466   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:09.652097   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:09.652110   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:09.676883   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:09.676892   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:09.691041   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:09.691053   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:09.706861   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:09.706875   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:09.739945   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:09.739953   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:09.743907   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:09.743917   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:09.757956   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:09.757969   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:09.769806   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:09.769819   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
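Each retry cycle above runs the same two-step diagnostic: discover container IDs with a docker name filter, then tail the last 400 log lines of each match. Run locally (outside the SSH session the test uses), the equivalent looks roughly like this sketch; the k8s_etcd filter is one example taken from the log, and the code is a stand-in for logs.go, not its source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Step 1: find container IDs by name filter, as in logs.go:276 above.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_etcd", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)

    	// Step 2: tail the last 400 log lines of each match, as in logs.go:123.
    	for _, id := range ids {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("--- %s ---\n%s", id, logs)
    	}
    }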
	I0729 04:38:12.283575   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:17.285903   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:17.286180   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:17.315860   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:17.315969   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:17.332081   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:17.332163   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:17.345070   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:17.345135   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:17.356197   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:17.356263   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:17.366354   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:17.366423   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:17.376780   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:17.376851   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:17.386547   18743 logs.go:276] 0 containers: []
	W0729 04:38:17.386557   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:17.386609   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:17.400282   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:17.400296   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:17.400301   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:17.405065   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:17.405074   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:17.440988   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:17.440999   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:17.456392   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:17.456402   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:17.470781   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:17.470792   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:17.482234   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:17.482242   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:38:17.494324   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:17.494333   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:17.512457   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:17.512468   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:17.548287   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:17.548297   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:17.559404   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:17.559414   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:17.571047   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:17.571063   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:17.587934   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:17.587944   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:17.612205   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:17.612214   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:20.126005   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:25.127633   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:25.127993   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:25.173726   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:25.173850   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:25.190829   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:25.190915   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:25.204265   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:25.204336   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:25.219822   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:25.219887   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:25.230615   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:25.230684   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:25.241543   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:25.241608   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:25.252041   18743 logs.go:276] 0 containers: []
	W0729 04:38:25.252056   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:25.252114   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:25.262791   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:25.262803   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:25.262809   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:25.274374   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:25.274387   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:25.291980   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:25.291992   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:25.303556   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:25.303568   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:25.328745   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:25.328759   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:25.343081   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:25.343091   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:25.377699   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:25.377706   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:25.382056   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:25.382063   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:25.395793   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:25.395803   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:25.410093   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:25.410108   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:25.421539   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:25.421553   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:25.455751   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:25.455769   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:25.469679   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:25.469689   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:38:27.983552   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:32.986266   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:32.986713   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:33.026937   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:33.027060   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:33.052025   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:33.052128   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:33.067644   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:33.067717   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:33.079605   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:33.079668   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:33.090899   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:33.090970   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:33.111117   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:33.111177   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:33.121035   18743 logs.go:276] 0 containers: []
	W0729 04:38:33.121045   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:33.121094   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:33.131266   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:33.131282   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:33.131287   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:33.150555   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:33.150569   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:33.162090   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:33.162100   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:38:33.173587   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:33.173597   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:33.188037   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:33.188050   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:33.211465   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:33.211472   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:33.222905   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:33.222917   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:33.237600   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:33.237613   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:33.241908   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:33.241916   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:33.280512   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:33.280526   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:33.293019   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:33.293029   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:33.310151   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:33.310161   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:33.321591   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:33.321602   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:35.858829   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:40.861218   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:40.861569   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:40.897336   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:40.897460   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:40.916980   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:40.917067   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:40.932032   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:40.932102   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:40.944057   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:40.944131   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:40.955262   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:40.955332   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:40.965633   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:40.965699   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:40.976120   18743 logs.go:276] 0 containers: []
	W0729 04:38:40.976130   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:40.976178   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:40.991205   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:40.991222   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:40.991227   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:41.006474   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:41.006485   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:41.017684   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:41.017698   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:41.021840   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:41.021848   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:41.036351   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:41.036362   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:41.050070   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:41.050081   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:38:41.061822   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:41.061833   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:41.082718   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:41.082730   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:41.094367   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:41.094382   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:41.118546   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:41.118556   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:41.153223   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:41.153235   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:41.189931   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:41.189944   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:41.201928   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:41.201940   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:43.716081   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:48.718456   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:48.718868   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:48.753928   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:48.754053   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:48.777356   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:48.777471   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:48.792388   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:48.792455   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:48.807543   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:48.807614   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:48.818762   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:48.818842   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:48.834157   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:48.834229   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:48.849014   18743 logs.go:276] 0 containers: []
	W0729 04:38:48.849027   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:48.849082   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:48.859933   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:48.859949   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:48.859953   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:48.874230   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:48.874239   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:38:48.885910   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:48.885921   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:48.900679   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:48.900690   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:48.912699   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:48.912713   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:48.937609   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:48.937616   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:48.949574   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:48.949583   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:48.971896   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:48.971908   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:49.006868   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:49.006877   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:49.010978   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:49.010986   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:49.044726   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:49.044735   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:49.059091   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:49.059101   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:49.071007   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:49.071019   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:51.585243   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:38:56.586757   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:38:56.587187   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:38:56.626234   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:38:56.626363   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:38:56.648648   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:38:56.648743   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:38:56.663763   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:38:56.663829   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:38:56.676444   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:38:56.676514   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:38:56.687462   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:38:56.687529   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:38:56.702031   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:38:56.702090   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:38:56.712412   18743 logs.go:276] 0 containers: []
	W0729 04:38:56.712422   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:38:56.712468   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:38:56.723712   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:38:56.723728   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:38:56.723733   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:38:56.748080   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:38:56.748088   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:38:56.785043   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:38:56.785054   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:38:56.800076   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:38:56.800086   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:38:56.813194   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:38:56.813207   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:38:56.825226   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:38:56.825239   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:38:56.849126   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:38:56.849137   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:38:56.861683   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:38:56.861692   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:38:56.873702   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:38:56.873715   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:38:56.906197   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:38:56.906203   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:38:56.910087   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:38:56.910096   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:38:56.926985   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:38:56.926996   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:38:56.944748   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:38:56.944759   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:38:59.458070   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:04.460528   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:04.460882   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:04.509759   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:04.509867   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:04.526778   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:04.526868   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:04.540127   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:39:04.540192   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:04.551903   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:04.551966   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:04.562964   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:04.563033   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:04.573736   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:04.573803   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:04.590132   18743 logs.go:276] 0 containers: []
	W0729 04:39:04.590145   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:04.590208   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:04.600628   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:04.600643   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:04.600649   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:04.622835   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:04.622847   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:04.643205   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:04.643219   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:04.655264   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:04.655274   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:04.690381   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:04.690394   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:04.704627   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:04.704640   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:04.716835   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:04.716849   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:04.728811   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:04.728822   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:04.741319   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:04.741330   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:04.752849   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:04.752862   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:04.777324   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:04.777333   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:04.811648   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:04.811658   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:39:04.816401   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:04.816408   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:07.332173   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:12.334677   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:12.334959   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:12.362515   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:12.362636   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:12.379716   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:12.379802   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:12.392950   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:39:12.393027   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:12.404281   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:12.404356   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:12.414797   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:12.414864   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:12.425440   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:12.425505   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:12.435435   18743 logs.go:276] 0 containers: []
	W0729 04:39:12.435448   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:12.435500   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:12.445692   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:12.445710   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:12.445713   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:12.469030   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:12.469037   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:12.501452   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:12.501459   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:39:12.505532   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:12.505538   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:12.540829   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:12.540840   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:12.555811   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:12.555822   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:12.581316   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:12.581328   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:12.593092   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:12.593101   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:12.613640   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:12.613651   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:12.624826   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:12.624840   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:12.636781   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:12.636792   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:12.652014   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:12.652024   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:12.664517   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:12.664529   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:15.178292   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:20.179177   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:20.179461   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:20.206491   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:20.206609   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:20.224479   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:20.224564   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:20.237624   18743 logs.go:276] 2 containers: [ffea91906a49 42f483a4e573]
	I0729 04:39:20.237690   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:20.249275   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:20.249344   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:20.259978   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:20.260051   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:20.270496   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:20.270562   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:20.281188   18743 logs.go:276] 0 containers: []
	W0729 04:39:20.281203   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:20.281262   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:20.291654   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:20.291667   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:20.291673   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:20.305732   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:20.305743   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:20.321835   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:20.321848   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:20.333305   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:20.333317   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:20.347945   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:20.347956   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:20.365093   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:20.365104   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:20.389747   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:20.389755   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:20.401945   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:20.401958   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:20.436176   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:20.436189   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:39:20.440756   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:20.440763   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:20.454610   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:20.454625   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:20.465882   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:20.465895   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:20.477646   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:20.477662   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:23.013968   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:28.016714   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:28.017015   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:28.051947   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:28.052076   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:28.071615   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:28.071701   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:28.087847   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:39:28.087920   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:28.100396   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:28.100459   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:28.112352   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:28.112420   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:28.123102   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:28.123172   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:28.135742   18743 logs.go:276] 0 containers: []
	W0729 04:39:28.135754   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:28.135815   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:28.146373   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:28.146392   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:28.146398   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:28.181850   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:28.181864   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:28.203630   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:28.203641   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:28.215527   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:28.215542   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:28.249200   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:28.249209   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:28.267846   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:28.267860   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:28.280096   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:28.280110   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:28.294131   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:28.294144   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:28.312002   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:28.312013   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:28.337885   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:39:28.337895   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:39:28.348959   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:28.348971   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:28.362593   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:39:28.362602   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:39:28.374129   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:28.374140   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:28.385977   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:28.385990   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:28.398299   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:28.398309   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
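
The lines above are one complete diagnostic pass, and the same pass repeats for the rest of this section: minikube polls https://10.0.2.15:8443/healthz, each GET fails with a 5-second client timeout ("Client.Timeout exceeded while awaiting headers"), and the failure triggers a fresh round of log gathering before the next attempt roughly 2.5 seconds later. A minimal Go sketch of that poll-with-timeout pattern follows; pollHealthz and the retry interval are illustrative assumptions reconstructed from the api_server.go:253/269 lines and timestamps, not minikube's actual code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz (hypothetical) repeatedly GETs the apiserver /healthz
    // endpoint until it answers 200 OK or the deadline passes. The 5s client
    // timeout matches the "Client.Timeout exceeded" failures in the log.
    func pollHealthz(url string, deadline time.Time) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the apiserver inside the VM serves a self-signed cert
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    return nil // apiserver healthy
                }
            }
            // the real code gathers container and journal logs at this point
            time.Sleep(2500 * time.Millisecond) // ~2.5s gap seen between attempts
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        if err := pollHealthz("https://10.0.2.15:8443/healthz", deadline); err != nil {
            fmt.Println(err)
        }
    }
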
	I0729 04:39:30.904706   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:35.907253   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:35.907328   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:35.919405   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:35.919467   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:35.933009   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:35.933088   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:35.943771   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:39:35.943858   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:35.955548   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:35.955610   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:35.967371   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:35.967451   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:35.978777   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:35.978849   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:35.990470   18743 logs.go:276] 0 containers: []
	W0729 04:39:35.990484   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:35.990553   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:36.002034   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:36.002053   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:36.002059   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:36.039546   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:36.039560   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:36.054810   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:36.054826   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:36.070318   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:36.070329   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:36.084606   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:36.084617   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:36.109356   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:36.109377   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:36.123163   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:36.123174   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:39:36.128608   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:36.128620   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:36.147478   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:36.147488   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:36.182619   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:39:36.182630   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:39:36.194867   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:39:36.194880   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:39:36.211420   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:36.211431   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:36.225806   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:36.225816   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:36.238294   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:36.238306   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:36.253041   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:36.253055   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:38.765712   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:43.768475   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:43.768939   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:43.804705   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:43.804838   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:43.827324   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:43.827434   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:43.843384   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:39:43.843464   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:43.855499   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:43.855568   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:43.866461   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:43.866526   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:43.877430   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:43.877495   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:43.887726   18743 logs.go:276] 0 containers: []
	W0729 04:39:43.887743   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:43.887796   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:43.898570   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:43.898595   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:43.898602   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:43.913072   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:43.913084   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:43.925054   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:43.925065   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:43.959412   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:43.959422   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:39:43.963649   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:43.963658   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:43.978414   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:43.978424   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:43.993584   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:43.993594   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:44.028663   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:39:44.028677   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:39:44.040289   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:44.040300   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:44.052449   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:44.052460   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:44.064351   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:44.064361   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:44.082229   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:39:44.082242   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:39:44.094200   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:44.094211   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:44.106239   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:44.106252   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:44.130779   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:44.130789   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:46.644266   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:51.646874   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:51.647305   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:51.686897   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:51.687029   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:51.709491   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:51.709592   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:51.725041   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:39:51.725114   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:51.737516   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:51.737575   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:51.748863   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:51.748922   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:51.759158   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:51.759225   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:51.769509   18743 logs.go:276] 0 containers: []
	W0729 04:39:51.769519   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:51.769571   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:51.780231   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:51.780248   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:51.780254   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:51.797297   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:51.797309   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:51.833673   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:39:51.833687   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:39:51.845120   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:51.845132   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:51.856970   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:51.856981   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:51.873007   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:51.873019   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:51.907642   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:39:51.907653   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:39:51.919418   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:51.919429   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:51.944911   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:51.944919   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:51.959744   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:51.959758   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:51.975391   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:51.975402   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:51.994777   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:51.994795   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:52.010237   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:52.010247   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:39:52.014636   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:52.014643   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:52.026336   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:52.026349   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:54.539701   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:39:59.542355   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:39:59.542758   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:39:59.582924   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:39:59.583037   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:39:59.612908   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:39:59.612987   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:39:59.625636   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:39:59.625709   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:39:59.637077   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:39:59.637149   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:39:59.648300   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:39:59.648365   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:39:59.658669   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:39:59.658735   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:39:59.669907   18743 logs.go:276] 0 containers: []
	W0729 04:39:59.669917   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:39:59.669968   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:39:59.680523   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:39:59.680540   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:39:59.680545   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:39:59.692465   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:39:59.692477   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:39:59.706563   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:39:59.706576   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:39:59.719296   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:39:59.719310   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:39:59.734433   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:39:59.734446   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:39:59.752004   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:39:59.752014   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:39:59.785739   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:39:59.785752   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:39:59.799780   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:39:59.799793   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:39:59.811512   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:39:59.811523   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:39:59.823772   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:39:59.823781   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:39:59.835927   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:39:59.835938   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:39:59.860465   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:39:59.860473   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:39:59.896645   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:39:59.896661   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:39:59.909537   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:39:59.909548   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:39:59.922439   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:39:59.922452   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
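
Each pass starts by enumerating the control-plane containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}. The --format argument is a Go text/template evaluated once per matching container, which is why the "1 containers: [...]" lines carry bare short IDs and why kindnet comes back as an empty list. A small sketch of that lookup, under the assumption of a hypothetical containerIDs helper (docker must be on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs (hypothetical) reproduces the per-component lookup above:
    // the {{.ID}} template yields one short container ID per output line.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, _ := containerIDs(c)
            // mirrors the "N containers: [...]" log lines, e.g. kindnet -> 0
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
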
	I0729 04:40:02.429918   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:07.432225   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:07.432657   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:07.471156   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:07.471285   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:07.494300   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:07.494407   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:07.510254   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:07.510328   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:07.525895   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:07.525962   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:07.539835   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:07.539897   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:07.550666   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:07.550738   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:07.560968   18743 logs.go:276] 0 containers: []
	W0729 04:40:07.560980   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:07.561033   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:07.571359   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:07.571375   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:07.571380   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:07.582721   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:07.582735   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:07.594159   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:07.594171   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:07.615150   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:07.615160   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:07.628053   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:07.628066   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:07.662940   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:07.662951   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:07.677883   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:07.677896   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:07.689323   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:07.689335   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:07.710657   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:07.710671   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:07.732544   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:07.732554   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:07.752338   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:07.752349   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:07.756474   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:07.756482   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:07.791175   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:07.791189   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:07.804977   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:07.804989   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:07.817024   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:07.817038   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:40:10.344703   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:15.346400   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:15.346577   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:15.363396   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:15.363476   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:15.376013   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:15.376081   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:15.387120   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:15.387188   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:15.397794   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:15.397859   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:15.408291   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:15.408360   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:15.418470   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:15.418536   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:15.428972   18743 logs.go:276] 0 containers: []
	W0729 04:40:15.428986   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:15.429043   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:15.439582   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:15.439622   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:15.439627   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:15.478255   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:15.478267   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:15.492378   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:15.492391   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:15.506611   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:15.506623   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:15.518390   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:15.518403   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:15.552116   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:15.552124   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:15.565592   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:15.565605   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:15.577466   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:15.577478   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:15.589455   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:15.589467   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:15.608366   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:15.608380   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:15.620042   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:15.620054   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:15.637429   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:15.637442   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:15.649158   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:15.649170   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:40:15.674761   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:15.674768   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:15.686299   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:15.686312   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:18.193387   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:23.196084   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:23.196364   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:23.224792   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:23.224914   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:23.243337   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:23.243430   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:23.257106   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:23.257179   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:23.268592   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:23.268661   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:23.284405   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:23.284474   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:23.294384   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:23.294459   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:23.304648   18743 logs.go:276] 0 containers: []
	W0729 04:40:23.304658   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:23.304715   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:23.315063   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:23.315081   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:23.315086   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:23.329554   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:23.329568   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:23.354537   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:23.354548   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:40:23.380386   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:23.380395   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:23.384450   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:23.384456   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:23.396221   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:23.396233   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:23.416325   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:23.416338   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:23.427523   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:23.427537   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:23.460156   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:23.460166   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:23.471946   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:23.471958   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:23.483593   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:23.483604   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:23.503158   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:23.503167   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:23.542122   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:23.542131   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:23.556668   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:23.556679   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:23.570353   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:23.570363   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:26.082845   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:31.085914   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:31.086357   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:31.128367   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:31.128498   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:31.156699   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:31.156790   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:31.171750   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:31.171830   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:31.185613   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:31.185681   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:31.196048   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:31.196112   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:31.206712   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:31.206780   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:31.217646   18743 logs.go:276] 0 containers: []
	W0729 04:40:31.217656   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:31.217711   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:31.227905   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:31.227923   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:31.227929   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:31.242467   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:31.242479   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:40:31.265766   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:31.265776   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:31.299136   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:31.299150   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:31.313687   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:31.313702   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:31.326676   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:31.326690   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:31.346788   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:31.346801   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:31.359403   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:31.359414   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:31.392318   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:31.392325   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:31.404149   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:31.404159   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:31.415857   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:31.415868   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:31.420436   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:31.420446   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:31.434625   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:31.434635   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:31.446777   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:31.446787   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:31.458434   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:31.458446   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:33.976219   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:38.978898   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:38.979066   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:38.996273   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:38.996359   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:39.011254   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:39.011317   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:39.022478   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:39.022538   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:39.032719   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:39.032785   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:39.043152   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:39.043218   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:39.053702   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:39.053765   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:39.063728   18743 logs.go:276] 0 containers: []
	W0729 04:40:39.063740   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:39.063801   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:39.074280   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:39.074297   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:39.074302   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:39.085306   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:39.085316   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:39.097093   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:39.097106   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:39.108203   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:39.108213   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:39.119875   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:39.119887   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:39.124687   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:39.124695   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:39.165421   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:39.165433   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:39.183071   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:39.183084   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:39.197472   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:39.197483   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:39.212142   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:39.212154   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:39.223334   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:39.223347   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:40:39.247889   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:39.247900   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:39.259333   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:39.259347   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:39.271239   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:39.271256   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:39.306395   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:39.306403   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:41.822273   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:46.825378   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:46.825823   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:46.861974   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:46.862081   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:46.881289   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:46.881386   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:46.896934   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:46.897005   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:46.912548   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:46.912617   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:46.923166   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:46.923228   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:46.939064   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:46.939153   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:46.958522   18743 logs.go:276] 0 containers: []
	W0729 04:40:46.958535   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:46.958584   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:46.974250   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:46.974268   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:46.974273   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:47.008253   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:47.008263   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:47.012514   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:47.012520   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:47.026769   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:47.026781   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:47.038646   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:47.038659   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:47.053387   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:47.053398   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:47.067398   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:47.067410   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:47.083511   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:47.083525   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:47.101378   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:47.101389   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:47.113007   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:47.113019   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:47.124335   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:47.124347   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:47.166613   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:47.166626   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:47.178526   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:47.178539   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:47.196243   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:47.196256   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:47.208499   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:47.208512   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
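
Once the container set is known, the "Gathering logs for ..." steps fan out over a fixed command list: docker logs --tail 400 <id> for each container, journalctl for the kubelet and Docker/cri-docker units, kubectl describe nodes, dmesg, and a crictl-or-docker fallback for container status. A sketch of that fan-out, with the shell commands copied verbatim from the log lines above and a hypothetical gather helper standing in for ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather (hypothetical) runs one log-collection command through
    // /bin/bash -c, the way the ssh_runner lines above do on the guest.
    func gather(name, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
    }

    func main() {
        // commands copied from the log; 5d3c2e3a2e24 is the kube-apiserver
        // container ID seen throughout this run
        sources := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"kube-apiserver", "docker logs --tail 400 5d3c2e3a2e24"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, s := range sources {
            gather(s[0], s[1])
        }
    }
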
	I0729 04:40:49.734420   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:40:54.736828   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:40:54.737379   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:40:54.778048   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:40:54.778165   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:40:54.799533   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:40:54.799651   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:40:54.815330   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:40:54.815414   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:40:54.827824   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:40:54.827889   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:40:54.838504   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:40:54.838575   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:40:54.848874   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:40:54.848929   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:40:54.858540   18743 logs.go:276] 0 containers: []
	W0729 04:40:54.858552   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:40:54.858606   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:40:54.869284   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:40:54.869300   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:40:54.869307   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:40:54.903831   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:40:54.903843   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:40:54.915382   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:40:54.915394   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:40:54.927654   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:40:54.927668   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:40:54.939906   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:40:54.939916   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:40:54.951532   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:40:54.951544   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:40:54.966688   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:40:54.966698   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:40:54.983993   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:40:54.984005   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:40:54.988821   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:40:54.988827   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:40:55.003586   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:40:55.003595   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:40:55.017799   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:40:55.017810   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:40:55.029251   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:40:55.029259   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:40:55.054945   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:40:55.054954   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:40:55.089615   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:40:55.089626   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:40:55.104504   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:40:55.104515   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:40:57.619476   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:41:02.622163   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:41:02.622256   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0729 04:41:02.638621   18743 logs.go:276] 1 containers: [5d3c2e3a2e24]
	I0729 04:41:02.638683   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0729 04:41:02.650376   18743 logs.go:276] 1 containers: [b9c15d8283d6]
	I0729 04:41:02.650440   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0729 04:41:02.662640   18743 logs.go:276] 4 containers: [faf8bb4682e8 aeb3e4298641 ffea91906a49 42f483a4e573]
	I0729 04:41:02.662694   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0729 04:41:02.674403   18743 logs.go:276] 1 containers: [d9cf94f70dec]
	I0729 04:41:02.674459   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0729 04:41:02.689856   18743 logs.go:276] 1 containers: [76f181e043d0]
	I0729 04:41:02.689917   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0729 04:41:02.700272   18743 logs.go:276] 1 containers: [db3fd2a7663d]
	I0729 04:41:02.700323   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0729 04:41:02.710227   18743 logs.go:276] 0 containers: []
	W0729 04:41:02.710236   18743 logs.go:278] No container was found matching "kindnet"
	I0729 04:41:02.710284   18743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0729 04:41:02.721083   18743 logs.go:276] 1 containers: [732896f98749]
	I0729 04:41:02.721100   18743 logs.go:123] Gathering logs for container status ...
	I0729 04:41:02.721107   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 04:41:02.734383   18743 logs.go:123] Gathering logs for dmesg ...
	I0729 04:41:02.734392   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 04:41:02.738900   18743 logs.go:123] Gathering logs for coredns [aeb3e4298641] ...
	I0729 04:41:02.738906   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aeb3e4298641"
	I0729 04:41:02.750920   18743 logs.go:123] Gathering logs for kube-proxy [76f181e043d0] ...
	I0729 04:41:02.750932   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76f181e043d0"
	I0729 04:41:02.762663   18743 logs.go:123] Gathering logs for kube-controller-manager [db3fd2a7663d] ...
	I0729 04:41:02.762674   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db3fd2a7663d"
	I0729 04:41:02.780855   18743 logs.go:123] Gathering logs for coredns [42f483a4e573] ...
	I0729 04:41:02.780866   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42f483a4e573"
	I0729 04:41:02.793834   18743 logs.go:123] Gathering logs for kube-apiserver [5d3c2e3a2e24] ...
	I0729 04:41:02.793847   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d3c2e3a2e24"
	I0729 04:41:02.809575   18743 logs.go:123] Gathering logs for etcd [b9c15d8283d6] ...
	I0729 04:41:02.809588   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9c15d8283d6"
	I0729 04:41:02.831124   18743 logs.go:123] Gathering logs for coredns [faf8bb4682e8] ...
	I0729 04:41:02.831141   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf8bb4682e8"
	I0729 04:41:02.850170   18743 logs.go:123] Gathering logs for coredns [ffea91906a49] ...
	I0729 04:41:02.850182   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffea91906a49"
	I0729 04:41:02.864365   18743 logs.go:123] Gathering logs for kubelet ...
	I0729 04:41:02.864378   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 04:41:02.901903   18743 logs.go:123] Gathering logs for kube-scheduler [d9cf94f70dec] ...
	I0729 04:41:02.901919   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9cf94f70dec"
	I0729 04:41:02.918975   18743 logs.go:123] Gathering logs for describe nodes ...
	I0729 04:41:02.918992   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 04:41:02.964402   18743 logs.go:123] Gathering logs for storage-provisioner [732896f98749] ...
	I0729 04:41:02.964416   18743 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 732896f98749"
	I0729 04:41:02.976752   18743 logs.go:123] Gathering logs for Docker ...
	I0729 04:41:02.976763   18743 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0729 04:41:05.503926   18743 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0729 04:41:10.506696   18743 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0729 04:41:10.516203   18743 out.go:177] 
	W0729 04:41:10.519220   18743 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0729 04:41:10.519233   18743 out.go:239] * 
	* 
	W0729 04:41:10.520218   18743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:10.534098   18743 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-514000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.38s)
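
The upgrade above dies at the API-server health gate: minikube polls https://10.0.2.15:8443/healthz (api_server.go:253), every probe hits the client timeout, and once the 6m0s node wait expires the run exits with GUEST_START. A minimal sketch of that probe, runnable by hand against the guest IP from the log (assumes curl on the host and a reachable guest; -k skips TLS verification because the host does not trust the apiserver certificate):

	# hand-rolled version of the healthz loop minikube runs internally
	for i in $(seq 1 10); do
	  curl -sk --max-time 5 https://10.0.2.15:8443/healthz && break
	  sleep 2
	done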

TestPause/serial/Start (10s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-334000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-334000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.950032208s)

-- stdout --
	* [pause-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-334000" primary control-plane node in "pause-334000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-334000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-334000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-334000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-334000 -n pause-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-334000 -n pause-334000: exit status 7 (47.620333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-334000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.00s)
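
From here on, every qemu2 start fails before a VM exists: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so host creation is aborted twice and the test gives up. A quick pre-flight one could run on the build host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (the install method is not visible in this log):

	# the socket file should exist and have a listening daemon behind it
	ls -l /var/run/socket_vmnet
	# hypothetical recovery for a Homebrew install; adjust for a source build
	sudo brew services restart socket_vmnet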

TestNoKubernetes/serial/StartWithK8s (9.75s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-257000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-257000 --driver=qemu2 : exit status 80 (9.718159333s)

-- stdout --
	* [NoKubernetes-257000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-257000" primary control-plane node in "NoKubernetes-257000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-257000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-257000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-257000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000: exit status 7 (30.890417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-257000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.75s)
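
The post-mortem's status probe uses a Go template to extract only the host field. Per minikube's own status help, the exit code encodes host, cluster, and Kubernetes state as bits from right to left, so the exit status 7 above is 1+2+4: all three down, consistent with the "Stopped" output and with the helper treating it as "may be ok":

	# same probe the post-mortem runs; prints "Stopped" and exits 7 here
	out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000
	echo "status exit code: $?"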

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2 : exit status 80 (5.23527925s)

-- stdout --
	* [NoKubernetes-257000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-257000
	* Restarting existing qemu2 VM for "NoKubernetes-257000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-257000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-257000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000: exit status 7 (55.334667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-257000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)
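
Note the changed path relative to StartWithK8s: the earlier failed start left a profile behind, so this run reports "Using the qemu2 driver based on existing profile" and fails at driver start ("Restarting existing qemu2 VM") rather than at host creation. The recovery the error text itself proposes is to drop the stale profile and start clean:

	# the fix suggested by the failure message above
	out/minikube-darwin-arm64 delete -p NoKubernetes-257000
	out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2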

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232157958s)

-- stdout --
	* [NoKubernetes-257000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-257000
	* Restarting existing qemu2 VM for "NoKubernetes-257000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-257000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-257000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000: exit status 7 (55.871959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-257000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-257000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-257000 --driver=qemu2 : exit status 80 (5.285639666s)

-- stdout --
	* [NoKubernetes-257000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-257000
	* Restarting existing qemu2 VM for "NoKubernetes-257000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-257000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-257000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-257000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-257000 -n NoKubernetes-257000: exit status 7 (56.993959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-257000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.34s)
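
From here the suite moves to the network-plugin matrix, where the start invocations differ mainly in the --cni flag. With no flag (the "auto" case below), minikube's CNI manager recommends the bridge CNI for the docker runtime on Kubernetes v1.24+ and sets NetworkPlugin=cni, as the cni.go lines in the next log show; the kindnet case pins the CNI explicitly:

	# auto: bridge CNI selected for qemu2 + docker on k8s v1.24+
	out/minikube-darwin-arm64 start -p auto-159000 --driver=qemu2
	# kindnet: CNI pinned on the command line
	out/minikube-darwin-arm64 start -p kindnet-159000 --cni=kindnet --driver=qemu2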

TestNetworkPlugins/group/auto/Start (9.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.821121459s)

-- stdout --
	* [auto-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-159000" primary control-plane node in "auto-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:39:13.594527   19245 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:39:13.594646   19245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:13.594652   19245 out.go:304] Setting ErrFile to fd 2...
	I0729 04:39:13.594654   19245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:13.594771   19245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:39:13.595838   19245 out.go:298] Setting JSON to false
	I0729 04:39:13.612281   19245 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9522,"bootTime":1722243631,"procs":494,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:39:13.612358   19245 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:39:13.618748   19245 out.go:177] * [auto-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:39:13.625931   19245 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:39:13.626009   19245 notify.go:220] Checking for updates...
	I0729 04:39:13.632890   19245 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:39:13.636946   19245 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:39:13.640919   19245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:39:13.643951   19245 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:39:13.646950   19245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:39:13.650195   19245 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:39:13.650265   19245 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:39:13.650317   19245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:39:13.654920   19245 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:39:13.661827   19245 start.go:297] selected driver: qemu2
	I0729 04:39:13.661833   19245 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:39:13.661838   19245 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:39:13.664159   19245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:39:13.667860   19245 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:39:13.670972   19245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:39:13.670997   19245 cni.go:84] Creating CNI manager for ""
	I0729 04:39:13.671005   19245 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:39:13.671011   19245 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:39:13.671039   19245 start.go:340] cluster config:
	{Name:auto-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:39:13.674496   19245 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:39:13.681893   19245 out.go:177] * Starting "auto-159000" primary control-plane node in "auto-159000" cluster
	I0729 04:39:13.686821   19245 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:39:13.686846   19245 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:39:13.686861   19245 cache.go:56] Caching tarball of preloaded images
	I0729 04:39:13.686934   19245 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:39:13.686943   19245 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:39:13.687000   19245 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/auto-159000/config.json ...
	I0729 04:39:13.687011   19245 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/auto-159000/config.json: {Name:mk010ee08122bfbc55885f878108c95445b41576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:39:13.687210   19245 start.go:360] acquireMachinesLock for auto-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:13.687240   19245 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "auto-159000"
	I0729 04:39:13.687251   19245 start.go:93] Provisioning new machine with config: &{Name:auto-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:13.687296   19245 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:13.693010   19245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:13.708485   19245 start.go:159] libmachine.API.Create for "auto-159000" (driver="qemu2")
	I0729 04:39:13.708510   19245 client.go:168] LocalClient.Create starting
	I0729 04:39:13.708577   19245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:13.708609   19245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:13.708621   19245 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:13.708661   19245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:13.708683   19245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:13.708696   19245 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:13.709062   19245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:13.857219   19245 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:14.031767   19245 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:14.031778   19245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:14.032038   19245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2
	I0729 04:39:14.041661   19245 main.go:141] libmachine: STDOUT: 
	I0729 04:39:14.041686   19245 main.go:141] libmachine: STDERR: 
	I0729 04:39:14.041735   19245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2 +20000M
	I0729 04:39:14.049630   19245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:14.049646   19245 main.go:141] libmachine: STDERR: 
	I0729 04:39:14.049656   19245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2
	I0729 04:39:14.049661   19245 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:14.049683   19245 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:14.049710   19245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:12:e3:cb:58:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2
	I0729 04:39:14.051273   19245 main.go:141] libmachine: STDOUT: 
	I0729 04:39:14.051288   19245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:14.051306   19245 client.go:171] duration metric: took 342.798958ms to LocalClient.Create
	I0729 04:39:16.053459   19245 start.go:128] duration metric: took 2.366196291s to createHost
	I0729 04:39:16.053583   19245 start.go:83] releasing machines lock for "auto-159000", held for 2.366391292s
	W0729 04:39:16.053668   19245 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:16.060056   19245 out.go:177] * Deleting "auto-159000" in qemu2 ...
	W0729 04:39:16.086314   19245 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:16.086343   19245 start.go:729] Will try again in 5 seconds ...
	I0729 04:39:21.088381   19245 start.go:360] acquireMachinesLock for auto-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:21.088679   19245 start.go:364] duration metric: took 246.208µs to acquireMachinesLock for "auto-159000"
	I0729 04:39:21.088745   19245 start.go:93] Provisioning new machine with config: &{Name:auto-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:21.088897   19245 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:21.095361   19245 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:21.129733   19245 start.go:159] libmachine.API.Create for "auto-159000" (driver="qemu2")
	I0729 04:39:21.129780   19245 client.go:168] LocalClient.Create starting
	I0729 04:39:21.129893   19245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:21.129954   19245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:21.129968   19245 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:21.130029   19245 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:21.130068   19245 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:21.130079   19245 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:21.130631   19245 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:21.300991   19245 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:21.330606   19245 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:21.330612   19245 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:21.330830   19245 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2
	I0729 04:39:21.340761   19245 main.go:141] libmachine: STDOUT: 
	I0729 04:39:21.340790   19245 main.go:141] libmachine: STDERR: 
	I0729 04:39:21.340844   19245 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2 +20000M
	I0729 04:39:21.348949   19245 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:21.349014   19245 main.go:141] libmachine: STDERR: 
	I0729 04:39:21.349025   19245 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2
	I0729 04:39:21.349030   19245 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:21.349042   19245 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:21.349074   19245 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:61:fb:dc:4e:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/auto-159000/disk.qcow2
	I0729 04:39:21.350839   19245 main.go:141] libmachine: STDOUT: 
	I0729 04:39:21.350856   19245 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:21.350868   19245 client.go:171] duration metric: took 221.088ms to LocalClient.Create
	I0729 04:39:23.352923   19245 start.go:128] duration metric: took 2.264060958s to createHost
	I0729 04:39:23.352992   19245 start.go:83] releasing machines lock for "auto-159000", held for 2.264343542s
	W0729 04:39:23.353157   19245 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:23.363444   19245 out.go:177] 
	W0729 04:39:23.367591   19245 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:39:23.367610   19245 out.go:239] * 
	* 
	W0729 04:39:23.368397   19245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:39:23.380592   19245 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.82s)
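
The full QEMU command line in the log above shows exactly where the refusal happens: qemu-system-aarch64 is not executed directly but wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to the daemon's unix socket and then hands the connected descriptor to QEMU as fd 3 for the -netdev socket,id=net0,fd=3 backend. That connection step can be reproduced without QEMU at all (assumes the BSD nc shipped with macOS, whose -U flag targets unix sockets):

	# probe the daemon's socket the same way the wrapper must
	nc -U /var/run/socket_vmnet < /dev/null \
	  && echo "socket_vmnet is accepting connections" \
	  || echo "refused: socket_vmnet daemon is not running"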

TestNetworkPlugins/group/kindnet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.888476709s)

-- stdout --
	* [kindnet-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-159000" primary control-plane node in "kindnet-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:39:25.483560   19356 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:39:25.483688   19356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:25.483691   19356 out.go:304] Setting ErrFile to fd 2...
	I0729 04:39:25.483694   19356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:25.483834   19356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:39:25.484979   19356 out.go:298] Setting JSON to false
	I0729 04:39:25.501349   19356 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9534,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:39:25.501415   19356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:39:25.510399   19356 out.go:177] * [kindnet-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:39:25.516178   19356 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:39:25.516249   19356 notify.go:220] Checking for updates...
	I0729 04:39:25.522467   19356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:39:25.523905   19356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:39:25.527490   19356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:39:25.530475   19356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:39:25.533481   19356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:39:25.536902   19356 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:39:25.536970   19356 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:39:25.537020   19356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:39:25.540448   19356 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:39:25.547400   19356 start.go:297] selected driver: qemu2
	I0729 04:39:25.547406   19356 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:39:25.547412   19356 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:39:25.549647   19356 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:39:25.552671   19356 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:39:25.555556   19356 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:39:25.555572   19356 cni.go:84] Creating CNI manager for "kindnet"
	I0729 04:39:25.555579   19356 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:39:25.555614   19356 start.go:340] cluster config:
	{Name:kindnet-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:39:25.559299   19356 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:39:25.567495   19356 out.go:177] * Starting "kindnet-159000" primary control-plane node in "kindnet-159000" cluster
	I0729 04:39:25.571452   19356 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:39:25.571468   19356 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:39:25.571481   19356 cache.go:56] Caching tarball of preloaded images
	I0729 04:39:25.571560   19356 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:39:25.571573   19356 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:39:25.571633   19356 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kindnet-159000/config.json ...
	I0729 04:39:25.571647   19356 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kindnet-159000/config.json: {Name:mk230de4aa00738ad9fe40c5df3452a93e8e311e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:39:25.571876   19356 start.go:360] acquireMachinesLock for kindnet-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:25.571910   19356 start.go:364] duration metric: took 27.916µs to acquireMachinesLock for "kindnet-159000"
	I0729 04:39:25.571923   19356 start.go:93] Provisioning new machine with config: &{Name:kindnet-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:25.571954   19356 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:25.579362   19356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:25.597197   19356 start.go:159] libmachine.API.Create for "kindnet-159000" (driver="qemu2")
	I0729 04:39:25.597231   19356 client.go:168] LocalClient.Create starting
	I0729 04:39:25.597309   19356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:25.597343   19356 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:25.597352   19356 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:25.597388   19356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:25.597412   19356 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:25.597422   19356 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:25.597780   19356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:25.746855   19356 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:25.925877   19356 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:25.925886   19356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:25.926118   19356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2
	I0729 04:39:25.935620   19356 main.go:141] libmachine: STDOUT: 
	I0729 04:39:25.935639   19356 main.go:141] libmachine: STDERR: 
	I0729 04:39:25.935689   19356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2 +20000M
	I0729 04:39:25.943707   19356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:25.943727   19356 main.go:141] libmachine: STDERR: 
	I0729 04:39:25.943740   19356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2
	I0729 04:39:25.943744   19356 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:25.943754   19356 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:25.943780   19356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:59:ce:b9:e1:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2
	I0729 04:39:25.945420   19356 main.go:141] libmachine: STDOUT: 
	I0729 04:39:25.945436   19356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:25.945455   19356 client.go:171] duration metric: took 348.226417ms to LocalClient.Create
	I0729 04:39:27.947625   19356 start.go:128] duration metric: took 2.375694s to createHost
	I0729 04:39:27.947697   19356 start.go:83] releasing machines lock for "kindnet-159000", held for 2.375834542s
	W0729 04:39:27.947776   19356 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:27.955071   19356 out.go:177] * Deleting "kindnet-159000" in qemu2 ...
	W0729 04:39:27.981402   19356 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:27.981443   19356 start.go:729] Will try again in 5 seconds ...
	I0729 04:39:32.983567   19356 start.go:360] acquireMachinesLock for kindnet-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:32.984079   19356 start.go:364] duration metric: took 426.792µs to acquireMachinesLock for "kindnet-159000"
	I0729 04:39:32.984210   19356 start.go:93] Provisioning new machine with config: &{Name:kindnet-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:32.984452   19356 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:32.993958   19356 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:33.042259   19356 start.go:159] libmachine.API.Create for "kindnet-159000" (driver="qemu2")
	I0729 04:39:33.042313   19356 client.go:168] LocalClient.Create starting
	I0729 04:39:33.042438   19356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:33.042507   19356 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:33.042527   19356 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:33.042594   19356 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:33.042639   19356 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:33.042654   19356 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:33.043201   19356 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:33.202679   19356 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:33.283542   19356 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:33.283549   19356 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:33.283769   19356 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2
	I0729 04:39:33.293273   19356 main.go:141] libmachine: STDOUT: 
	I0729 04:39:33.293292   19356 main.go:141] libmachine: STDERR: 
	I0729 04:39:33.293355   19356 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2 +20000M
	I0729 04:39:33.301231   19356 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:33.301253   19356 main.go:141] libmachine: STDERR: 
	I0729 04:39:33.301263   19356 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2
	I0729 04:39:33.301269   19356 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:33.301280   19356 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:33.301304   19356 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:aa:e0:bd:cf:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kindnet-159000/disk.qcow2
	I0729 04:39:33.303267   19356 main.go:141] libmachine: STDOUT: 
	I0729 04:39:33.303284   19356 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:33.303296   19356 client.go:171] duration metric: took 260.983417ms to LocalClient.Create
	I0729 04:39:35.305468   19356 start.go:128] duration metric: took 2.32103125s to createHost
	I0729 04:39:35.305536   19356 start.go:83] releasing machines lock for "kindnet-159000", held for 2.321491291s
	W0729 04:39:35.305958   19356 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:35.318618   19356 out.go:177] 
	W0729 04:39:35.321962   19356 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:39:35.322007   19356 out.go:239] * 
	* 
	W0729 04:39:35.324653   19356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:39:35.332094   19356 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.89s)
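
Every start attempt in this test dies at the same step: the qemu2 driver hands the QEMU command line to /opt/socket_vmnet/bin/socket_vmnet_client, which immediately fails because nothing is accepting connections on /var/run/socket_vmnet. A minimal pre-flight probe in Go (hypothetical; not part of net_test.go) that checks the same precondition, assuming only the socket path shown in the logs, could look like:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config above; a dial failure here
	// reproduces the `Failed to connect to "/var/run/socket_vmnet":
	// Connection refused` error seen on every start attempt.
	const socketPath = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", socketPath)
}

If this probe reports "connection refused", the socket_vmnet daemon on the build agent is down (or listening on a different path), and every qemu2-driver start in this report will fail the same way regardless of the CNI under test.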

TestNetworkPlugins/group/calico/Start (9.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.9089825s)

-- stdout --
	* [calico-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-159000" primary control-plane node in "calico-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:39:37.595534   19473 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:39:37.595666   19473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:37.595669   19473 out.go:304] Setting ErrFile to fd 2...
	I0729 04:39:37.595682   19473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:37.595841   19473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:39:37.596949   19473 out.go:298] Setting JSON to false
	I0729 04:39:37.613935   19473 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9546,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:39:37.613997   19473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:39:37.618951   19473 out.go:177] * [calico-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:39:37.625797   19473 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:39:37.625855   19473 notify.go:220] Checking for updates...
	I0729 04:39:37.632834   19473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:39:37.636807   19473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:39:37.639840   19473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:39:37.642832   19473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:39:37.645843   19473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:39:37.649088   19473 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:39:37.649160   19473 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:39:37.649217   19473 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:39:37.651772   19473 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:39:37.658800   19473 start.go:297] selected driver: qemu2
	I0729 04:39:37.658807   19473 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:39:37.658813   19473 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:39:37.661168   19473 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:39:37.664800   19473 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:39:37.666225   19473 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:39:37.666256   19473 cni.go:84] Creating CNI manager for "calico"
	I0729 04:39:37.666261   19473 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 04:39:37.666289   19473 start.go:340] cluster config:
	{Name:calico-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:39:37.669846   19473 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:39:37.677797   19473 out.go:177] * Starting "calico-159000" primary control-plane node in "calico-159000" cluster
	I0729 04:39:37.681826   19473 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:39:37.681842   19473 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:39:37.681852   19473 cache.go:56] Caching tarball of preloaded images
	I0729 04:39:37.681913   19473 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:39:37.681918   19473 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:39:37.681983   19473 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/calico-159000/config.json ...
	I0729 04:39:37.681996   19473 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/calico-159000/config.json: {Name:mk52b4290574d22799241e7b2df35cc12232a7b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:39:37.682363   19473 start.go:360] acquireMachinesLock for calico-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:37.682395   19473 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "calico-159000"
	I0729 04:39:37.682407   19473 start.go:93] Provisioning new machine with config: &{Name:calico-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:37.682438   19473 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:37.690774   19473 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:37.707661   19473 start.go:159] libmachine.API.Create for "calico-159000" (driver="qemu2")
	I0729 04:39:37.707691   19473 client.go:168] LocalClient.Create starting
	I0729 04:39:37.707753   19473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:37.707785   19473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:37.707793   19473 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:37.707829   19473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:37.707850   19473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:37.707860   19473 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:37.708275   19473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:37.856907   19473 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:37.978956   19473 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:37.978963   19473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:37.979168   19473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2
	I0729 04:39:37.988379   19473 main.go:141] libmachine: STDOUT: 
	I0729 04:39:37.988398   19473 main.go:141] libmachine: STDERR: 
	I0729 04:39:37.988459   19473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2 +20000M
	I0729 04:39:37.996329   19473 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:37.996346   19473 main.go:141] libmachine: STDERR: 
	I0729 04:39:37.996370   19473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2
	I0729 04:39:37.996375   19473 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:37.996389   19473 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:37.996418   19473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:65:7c:a6:fa:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2
	I0729 04:39:37.998123   19473 main.go:141] libmachine: STDOUT: 
	I0729 04:39:37.998138   19473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:37.998157   19473 client.go:171] duration metric: took 290.467792ms to LocalClient.Create
	I0729 04:39:40.000368   19473 start.go:128] duration metric: took 2.317958584s to createHost
	I0729 04:39:40.000471   19473 start.go:83] releasing machines lock for "calico-159000", held for 2.31812225s
	W0729 04:39:40.000522   19473 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:40.011533   19473 out.go:177] * Deleting "calico-159000" in qemu2 ...
	W0729 04:39:40.047360   19473 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:40.047393   19473 start.go:729] Will try again in 5 seconds ...
	I0729 04:39:45.049537   19473 start.go:360] acquireMachinesLock for calico-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:45.050071   19473 start.go:364] duration metric: took 429.417µs to acquireMachinesLock for "calico-159000"
	I0729 04:39:45.050181   19473 start.go:93] Provisioning new machine with config: &{Name:calico-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:45.050364   19473 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:45.059858   19473 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:45.107623   19473 start.go:159] libmachine.API.Create for "calico-159000" (driver="qemu2")
	I0729 04:39:45.107687   19473 client.go:168] LocalClient.Create starting
	I0729 04:39:45.107822   19473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:45.107891   19473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:45.107909   19473 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:45.108005   19473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:45.108068   19473 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:45.108085   19473 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:45.108777   19473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:45.265855   19473 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:45.415259   19473 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:45.415272   19473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:45.415532   19473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2
	I0729 04:39:45.425533   19473 main.go:141] libmachine: STDOUT: 
	I0729 04:39:45.425552   19473 main.go:141] libmachine: STDERR: 
	I0729 04:39:45.425604   19473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2 +20000M
	I0729 04:39:45.433753   19473 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:45.433767   19473 main.go:141] libmachine: STDERR: 
	I0729 04:39:45.433779   19473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2
	I0729 04:39:45.433784   19473 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:45.433798   19473 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:45.433826   19473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:16:2b:35:e2:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/calico-159000/disk.qcow2
	I0729 04:39:45.435496   19473 main.go:141] libmachine: STDOUT: 
	I0729 04:39:45.435511   19473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:45.435529   19473 client.go:171] duration metric: took 327.843708ms to LocalClient.Create
	I0729 04:39:47.437656   19473 start.go:128] duration metric: took 2.387322375s to createHost
	I0729 04:39:47.437770   19473 start.go:83] releasing machines lock for "calico-159000", held for 2.387737583s
	W0729 04:39:47.438116   19473 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:47.447753   19473 out.go:177] 
	W0729 04:39:47.450669   19473 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:39:47.450722   19473 out.go:239] * 
	* 
	W0729 04:39:47.452358   19473 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:39:47.462747   19473 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.91s)
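
The control flow is identical in both failures above: one StartHost attempt, deletion of the half-created profile, a five-second wait, a single retry, then exit with GUEST_PROVISION. A stripped-down Go sketch of that retry shape (an illustration of the behavior visible in these logs, not minikube's actual source) is:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for libmachine host creation; it fails here the same
// way the logs do, because the socket_vmnet daemon is unreachable.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

Because the failure is environmental rather than transient, the single five-second retry cannot succeed, which is why each start in this group fails in roughly ten seconds.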

TestNetworkPlugins/group/custom-flannel/Start (10.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.028472333s)

-- stdout --
	* [custom-flannel-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-159000" primary control-plane node in "custom-flannel-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:39:49.865211   19596 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:39:49.865326   19596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:49.865330   19596 out.go:304] Setting ErrFile to fd 2...
	I0729 04:39:49.865332   19596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:39:49.865449   19596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:39:49.866518   19596 out.go:298] Setting JSON to false
	I0729 04:39:49.882957   19596 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9558,"bootTime":1722243631,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:39:49.883026   19596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:39:49.889035   19596 out.go:177] * [custom-flannel-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:39:49.895065   19596 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:39:49.895125   19596 notify.go:220] Checking for updates...
	I0729 04:39:49.901999   19596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:39:49.905032   19596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:39:49.908058   19596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:39:49.911004   19596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:39:49.914024   19596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:39:49.917314   19596 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:39:49.917380   19596 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:39:49.917447   19596 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:39:49.919856   19596 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:39:49.926944   19596 start.go:297] selected driver: qemu2
	I0729 04:39:49.926952   19596 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:39:49.926958   19596 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:39:49.929491   19596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:39:49.930744   19596 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:39:49.935033   19596 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:39:49.935048   19596 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 04:39:49.935056   19596 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 04:39:49.935085   19596 start.go:340] cluster config:
	{Name:custom-flannel-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:39:49.938604   19596 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:39:49.946985   19596 out.go:177] * Starting "custom-flannel-159000" primary control-plane node in "custom-flannel-159000" cluster
	I0729 04:39:49.950990   19596 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:39:49.951006   19596 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:39:49.951016   19596 cache.go:56] Caching tarball of preloaded images
	I0729 04:39:49.951078   19596 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:39:49.951083   19596 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:39:49.951150   19596 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/custom-flannel-159000/config.json ...
	I0729 04:39:49.951161   19596 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/custom-flannel-159000/config.json: {Name:mk68d196bf284c7e288f3b5efee186de269eda62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:39:49.951370   19596 start.go:360] acquireMachinesLock for custom-flannel-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:49.951400   19596 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "custom-flannel-159000"
	I0729 04:39:49.951412   19596 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:49.951444   19596 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:49.960036   19596 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:49.975053   19596 start.go:159] libmachine.API.Create for "custom-flannel-159000" (driver="qemu2")
	I0729 04:39:49.975076   19596 client.go:168] LocalClient.Create starting
	I0729 04:39:49.975135   19596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:49.975166   19596 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:49.975176   19596 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:49.975215   19596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:49.975241   19596 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:49.975248   19596 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:49.975590   19596 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:50.127210   19596 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:50.218686   19596 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:50.218693   19596 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:50.218923   19596 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2
	I0729 04:39:50.228011   19596 main.go:141] libmachine: STDOUT: 
	I0729 04:39:50.228031   19596 main.go:141] libmachine: STDERR: 
	I0729 04:39:50.228082   19596 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2 +20000M
	I0729 04:39:50.236211   19596 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:50.236228   19596 main.go:141] libmachine: STDERR: 
	I0729 04:39:50.236246   19596 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2
	I0729 04:39:50.236250   19596 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:50.236262   19596 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:50.236295   19596 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:68:ed:50:47:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2
	I0729 04:39:50.237927   19596 main.go:141] libmachine: STDOUT: 
	I0729 04:39:50.237943   19596 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:50.237962   19596 client.go:171] duration metric: took 262.888667ms to LocalClient.Create
	I0729 04:39:52.240000   19596 start.go:128] duration metric: took 2.288602459s to createHost
	I0729 04:39:52.240037   19596 start.go:83] releasing machines lock for "custom-flannel-159000", held for 2.288681083s
	W0729 04:39:52.240085   19596 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:52.249729   19596 out.go:177] * Deleting "custom-flannel-159000" in qemu2 ...
	W0729 04:39:52.264788   19596 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:52.264807   19596 start.go:729] Will try again in 5 seconds ...
	I0729 04:39:57.265271   19596 start.go:360] acquireMachinesLock for custom-flannel-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:39:57.265658   19596 start.go:364] duration metric: took 308.042µs to acquireMachinesLock for "custom-flannel-159000"
	I0729 04:39:57.265779   19596 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:39:57.265969   19596 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:39:57.271556   19596 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:39:57.315096   19596 start.go:159] libmachine.API.Create for "custom-flannel-159000" (driver="qemu2")
	I0729 04:39:57.315156   19596 client.go:168] LocalClient.Create starting
	I0729 04:39:57.315345   19596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:39:57.315414   19596 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:57.315433   19596 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:57.315490   19596 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:39:57.315539   19596 main.go:141] libmachine: Decoding PEM data...
	I0729 04:39:57.315555   19596 main.go:141] libmachine: Parsing certificate...
	I0729 04:39:57.316218   19596 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:39:57.471178   19596 main.go:141] libmachine: Creating SSH key...
	I0729 04:39:57.810511   19596 main.go:141] libmachine: Creating Disk image...
	I0729 04:39:57.810524   19596 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:39:57.810742   19596 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2
	I0729 04:39:57.820425   19596 main.go:141] libmachine: STDOUT: 
	I0729 04:39:57.820453   19596 main.go:141] libmachine: STDERR: 
	I0729 04:39:57.820519   19596 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2 +20000M
	I0729 04:39:57.828804   19596 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:39:57.828822   19596 main.go:141] libmachine: STDERR: 
	I0729 04:39:57.828838   19596 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2
	I0729 04:39:57.828844   19596 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:39:57.828855   19596 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:39:57.828889   19596 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:3a:54:2d:81:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/custom-flannel-159000/disk.qcow2
	I0729 04:39:57.830708   19596 main.go:141] libmachine: STDOUT: 
	I0729 04:39:57.830736   19596 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:39:57.830754   19596 client.go:171] duration metric: took 515.604542ms to LocalClient.Create
	I0729 04:39:59.832792   19596 start.go:128] duration metric: took 2.566830375s to createHost
	I0729 04:39:59.832806   19596 start.go:83] releasing machines lock for "custom-flannel-159000", held for 2.567194833s
	W0729 04:39:59.832912   19596 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:39:59.840881   19596 out.go:177] 
	W0729 04:39:59.845912   19596 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:39:59.845918   19596 out.go:239] * 
	* 
	W0729 04:39:59.846404   19596 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:39:59.857876   19596 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.03s)
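
Every attempt above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and start exits with GUEST_PROVISION (exit status 80). A minimal sketch for checking the daemon independently of minikube (an illustration added here, not part of the captured output; it assumes only the Go standard library and the socket path from the log):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Probe the unix socket that socket_vmnet_client failed to reach.
		// A "connection refused" error here reproduces the failure above,
		// i.e. the socket_vmnet daemon is not accepting connections.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the build host (however it is supervised there) is the obvious prerequisite to re-running this group.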

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.785819083s)

-- stdout --
	* [false-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-159000" primary control-plane node in "false-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:40:02.194189   19723 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:40:02.194338   19723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:02.194346   19723 out.go:304] Setting ErrFile to fd 2...
	I0729 04:40:02.194349   19723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:02.194473   19723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:40:02.195549   19723 out.go:298] Setting JSON to false
	I0729 04:40:02.212088   19723 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9571,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:40:02.212155   19723 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:40:02.217998   19723 out.go:177] * [false-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:40:02.223965   19723 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:40:02.224030   19723 notify.go:220] Checking for updates...
	I0729 04:40:02.232942   19723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:40:02.235949   19723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:40:02.239959   19723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:40:02.242996   19723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:40:02.245909   19723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:40:02.249247   19723 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:40:02.249313   19723 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:40:02.249362   19723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:40:02.253889   19723 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:40:02.260939   19723 start.go:297] selected driver: qemu2
	I0729 04:40:02.260948   19723 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:40:02.260955   19723 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:40:02.263187   19723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:40:02.266810   19723 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:40:02.270055   19723 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:40:02.270076   19723 cni.go:84] Creating CNI manager for "false"
	I0729 04:40:02.270111   19723 start.go:340] cluster config:
	{Name:false-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:40:02.273617   19723 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:40:02.281880   19723 out.go:177] * Starting "false-159000" primary control-plane node in "false-159000" cluster
	I0729 04:40:02.285972   19723 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:40:02.285988   19723 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:40:02.286003   19723 cache.go:56] Caching tarball of preloaded images
	I0729 04:40:02.286067   19723 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:40:02.286076   19723 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:40:02.286157   19723 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/false-159000/config.json ...
	I0729 04:40:02.286173   19723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/false-159000/config.json: {Name:mk960fd4ba9330734a402fd3306cbc1bde408f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:40:02.286375   19723 start.go:360] acquireMachinesLock for false-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:02.286414   19723 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "false-159000"
	I0729 04:40:02.286424   19723 start.go:93] Provisioning new machine with config: &{Name:false-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:02.286455   19723 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:02.294916   19723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:02.309742   19723 start.go:159] libmachine.API.Create for "false-159000" (driver="qemu2")
	I0729 04:40:02.309767   19723 client.go:168] LocalClient.Create starting
	I0729 04:40:02.309831   19723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:02.309860   19723 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:02.309870   19723 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:02.309904   19723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:02.309929   19723 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:02.309937   19723 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:02.310271   19723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:02.460798   19723 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:02.494987   19723 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:02.494993   19723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:02.495207   19723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2
	I0729 04:40:02.504273   19723 main.go:141] libmachine: STDOUT: 
	I0729 04:40:02.504292   19723 main.go:141] libmachine: STDERR: 
	I0729 04:40:02.504342   19723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2 +20000M
	I0729 04:40:02.512261   19723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:02.512276   19723 main.go:141] libmachine: STDERR: 
	I0729 04:40:02.512299   19723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2
	I0729 04:40:02.512305   19723 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:02.512316   19723 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:02.512343   19723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:bb:f8:07:a9:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2
	I0729 04:40:02.513957   19723 main.go:141] libmachine: STDOUT: 
	I0729 04:40:02.513970   19723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:02.513989   19723 client.go:171] duration metric: took 204.220792ms to LocalClient.Create
	I0729 04:40:04.516252   19723 start.go:128] duration metric: took 2.229805334s to createHost
	I0729 04:40:04.516355   19723 start.go:83] releasing machines lock for "false-159000", held for 2.22998525s
	W0729 04:40:04.516401   19723 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:04.532853   19723 out.go:177] * Deleting "false-159000" in qemu2 ...
	W0729 04:40:04.558132   19723 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:04.558158   19723 start.go:729] Will try again in 5 seconds ...
	I0729 04:40:09.560286   19723 start.go:360] acquireMachinesLock for false-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:09.560796   19723 start.go:364] duration metric: took 414.083µs to acquireMachinesLock for "false-159000"
	I0729 04:40:09.560857   19723 start.go:93] Provisioning new machine with config: &{Name:false-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:09.561015   19723 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:09.570518   19723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:09.613941   19723 start.go:159] libmachine.API.Create for "false-159000" (driver="qemu2")
	I0729 04:40:09.613993   19723 client.go:168] LocalClient.Create starting
	I0729 04:40:09.614118   19723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:09.614180   19723 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:09.614196   19723 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:09.614255   19723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:09.614299   19723 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:09.614314   19723 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:09.614857   19723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:09.770093   19723 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:09.887710   19723 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:09.887718   19723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:09.887933   19723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2
	I0729 04:40:09.897504   19723 main.go:141] libmachine: STDOUT: 
	I0729 04:40:09.897525   19723 main.go:141] libmachine: STDERR: 
	I0729 04:40:09.897572   19723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2 +20000M
	I0729 04:40:09.905424   19723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:09.905437   19723 main.go:141] libmachine: STDERR: 
	I0729 04:40:09.905453   19723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2
	I0729 04:40:09.905459   19723 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:09.905473   19723 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:09.905501   19723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:a3:f6:d8:53:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/false-159000/disk.qcow2
	I0729 04:40:09.907081   19723 main.go:141] libmachine: STDOUT: 
	I0729 04:40:09.907100   19723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:09.907114   19723 client.go:171] duration metric: took 293.12275ms to LocalClient.Create
	I0729 04:40:11.909291   19723 start.go:128] duration metric: took 2.348295625s to createHost
	I0729 04:40:11.909531   19723 start.go:83] releasing machines lock for "false-159000", held for 2.348656084s
	W0729 04:40:11.909852   19723 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:11.921692   19723 out.go:177] 
	W0729 04:40:11.926728   19723 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:40:11.926756   19723 out.go:239] * 
	* 
	W0729 04:40:11.930015   19723 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:40:11.937647   19723 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
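
The "false" CNI variant fails identically: disk image created, QEMU launch via socket_vmnet_client refused, profile deleted, one retry after 5 seconds, second refusal, exit status 80. A hedged repro of just the failing launch step, reusing the client and socket paths from the log; wrapping /usr/bin/true instead of qemu-system-aarch64 is an illustrative substitution, not something the suite does:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client connects to the vmnet socket and execs the
		// wrapped command with the connection passed as fd 3 (hence
		// "-netdev socket,id=net0,fd=3" on the QEMU command line above).
		// With the daemon down it exits non-zero and prints the same
		// "Failed to connect" message captured in STDERR above.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", "/usr/bin/true")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s(exit: %v)\n", out, err)
	}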

TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.9215325s)

-- stdout --
	* [enable-default-cni-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-159000" primary control-plane node in "enable-default-cni-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:40:14.143611   19839 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:40:14.143956   19839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:14.143961   19839 out.go:304] Setting ErrFile to fd 2...
	I0729 04:40:14.143964   19839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:14.144155   19839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:40:14.145680   19839 out.go:298] Setting JSON to false
	I0729 04:40:14.161824   19839 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9583,"bootTime":1722243631,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:40:14.161895   19839 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:40:14.165600   19839 out.go:177] * [enable-default-cni-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:40:14.171512   19839 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:40:14.171562   19839 notify.go:220] Checking for updates...
	I0729 04:40:14.178486   19839 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:40:14.181443   19839 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:40:14.184552   19839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:40:14.187370   19839 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:40:14.190468   19839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:40:14.193924   19839 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:40:14.193988   19839 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:40:14.194054   19839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:40:14.197375   19839 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:40:14.204482   19839 start.go:297] selected driver: qemu2
	I0729 04:40:14.204490   19839 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:40:14.204498   19839 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:40:14.206577   19839 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:40:14.209546   19839 out.go:177] * Automatically selected the socket_vmnet network
	E0729 04:40:14.212546   19839 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0729 04:40:14.212559   19839 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:40:14.212587   19839 cni.go:84] Creating CNI manager for "bridge"
	I0729 04:40:14.212594   19839 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:40:14.212619   19839 start.go:340] cluster config:
	{Name:enable-default-cni-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:40:14.216358   19839 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:40:14.224448   19839 out.go:177] * Starting "enable-default-cni-159000" primary control-plane node in "enable-default-cni-159000" cluster
	I0729 04:40:14.228509   19839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:40:14.228527   19839 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:40:14.228536   19839 cache.go:56] Caching tarball of preloaded images
	I0729 04:40:14.228602   19839 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:40:14.228609   19839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:40:14.228673   19839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/enable-default-cni-159000/config.json ...
	I0729 04:40:14.228685   19839 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/enable-default-cni-159000/config.json: {Name:mk54fdcde75b691cf7c75e9f64520f7942f0e52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:40:14.228907   19839 start.go:360] acquireMachinesLock for enable-default-cni-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:14.228943   19839 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "enable-default-cni-159000"
	I0729 04:40:14.228954   19839 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:14.228984   19839 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:14.237431   19839 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:14.252425   19839 start.go:159] libmachine.API.Create for "enable-default-cni-159000" (driver="qemu2")
	I0729 04:40:14.252445   19839 client.go:168] LocalClient.Create starting
	I0729 04:40:14.252502   19839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:14.252533   19839 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:14.252542   19839 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:14.252578   19839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:14.252602   19839 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:14.252610   19839 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:14.252968   19839 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:14.401097   19839 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:14.536148   19839 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:14.536157   19839 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:14.536388   19839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2
	I0729 04:40:14.545577   19839 main.go:141] libmachine: STDOUT: 
	I0729 04:40:14.545597   19839 main.go:141] libmachine: STDERR: 
	I0729 04:40:14.545644   19839 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2 +20000M
	I0729 04:40:14.553554   19839 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:14.553565   19839 main.go:141] libmachine: STDERR: 
	I0729 04:40:14.553592   19839 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2
	I0729 04:40:14.553598   19839 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:14.553612   19839 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:14.553637   19839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:1a:bf:e9:43:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2
	I0729 04:40:14.555242   19839 main.go:141] libmachine: STDOUT: 
	I0729 04:40:14.555260   19839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:14.555282   19839 client.go:171] duration metric: took 302.84025ms to LocalClient.Create
	I0729 04:40:16.557439   19839 start.go:128] duration metric: took 2.328479083s to createHost
	I0729 04:40:16.557552   19839 start.go:83] releasing machines lock for "enable-default-cni-159000", held for 2.328655167s
	W0729 04:40:16.557608   19839 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:16.571195   19839 out.go:177] * Deleting "enable-default-cni-159000" in qemu2 ...
	W0729 04:40:16.596410   19839 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:16.596440   19839 start.go:729] Will try again in 5 seconds ...
	I0729 04:40:21.598584   19839 start.go:360] acquireMachinesLock for enable-default-cni-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:21.599002   19839 start.go:364] duration metric: took 334.792µs to acquireMachinesLock for "enable-default-cni-159000"
	I0729 04:40:21.599141   19839 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:21.599449   19839 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:21.605101   19839 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:21.654982   19839 start.go:159] libmachine.API.Create for "enable-default-cni-159000" (driver="qemu2")
	I0729 04:40:21.655038   19839 client.go:168] LocalClient.Create starting
	I0729 04:40:21.655171   19839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:21.655247   19839 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:21.655270   19839 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:21.655338   19839 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:21.655403   19839 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:21.655419   19839 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:21.655994   19839 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:21.816748   19839 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:21.975875   19839 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:21.975884   19839 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:21.976134   19839 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2
	I0729 04:40:21.986008   19839 main.go:141] libmachine: STDOUT: 
	I0729 04:40:21.986026   19839 main.go:141] libmachine: STDERR: 
	I0729 04:40:21.986074   19839 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2 +20000M
	I0729 04:40:21.994067   19839 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:21.994082   19839 main.go:141] libmachine: STDERR: 
	I0729 04:40:21.994093   19839 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2
	I0729 04:40:21.994107   19839 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:21.994117   19839 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:21.994155   19839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:fe:fc:a4:d2:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/enable-default-cni-159000/disk.qcow2
	I0729 04:40:21.995841   19839 main.go:141] libmachine: STDOUT: 
	I0729 04:40:21.995856   19839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:21.995868   19839 client.go:171] duration metric: took 340.833459ms to LocalClient.Create
	I0729 04:40:23.997937   19839 start.go:128] duration metric: took 2.398515292s to createHost
	I0729 04:40:23.998000   19839 start.go:83] releasing machines lock for "enable-default-cni-159000", held for 2.399037792s
	W0729 04:40:23.998140   19839 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:24.011473   19839 out.go:177] 
	W0729 04:40:24.016465   19839 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:40:24.016475   19839 out.go:239] * 
	* 
	W0729 04:40:24.017194   19839 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:40:24.025462   19839 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.92s)
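Triage note: all three failing starts in this group die at the same step. socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu-system-aarch64 command is never actually launched and minikube exits with GUEST_PROVISION. A minimal check-and-restart sketch for the affected host follows; it assumes the default install paths that appear in these logs, and the gateway address is the socket_vmnet project default, not anything taken from this report:

    # Does the daemon socket exist, and is a socket_vmnet daemon running?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If no daemon is running, start one manually (per the socket_vmnet README;
    # 192.168.105.1 is the project's default gateway address)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

The flannel and bridge runs below fail identically, so restoring the daemon once would likely unblock the whole group.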

TestNetworkPlugins/group/flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.822690208s)

-- stdout --
	* [flannel-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-159000" primary control-plane node in "flannel-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:40:26.170815   19948 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:40:26.170936   19948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:26.170939   19948 out.go:304] Setting ErrFile to fd 2...
	I0729 04:40:26.170942   19948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:26.171069   19948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:40:26.172151   19948 out.go:298] Setting JSON to false
	I0729 04:40:26.188492   19948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9595,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:40:26.188628   19948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:40:26.194534   19948 out.go:177] * [flannel-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:40:26.202499   19948 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:40:26.202585   19948 notify.go:220] Checking for updates...
	I0729 04:40:26.209315   19948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:40:26.212480   19948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:40:26.216479   19948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:40:26.218018   19948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:40:26.221464   19948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:40:26.224868   19948 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:40:26.224931   19948 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:40:26.224998   19948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:40:26.229289   19948 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:40:26.236476   19948 start.go:297] selected driver: qemu2
	I0729 04:40:26.236481   19948 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:40:26.236487   19948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:40:26.238606   19948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:40:26.241482   19948 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:40:26.244533   19948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:40:26.244562   19948 cni.go:84] Creating CNI manager for "flannel"
	I0729 04:40:26.244565   19948 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0729 04:40:26.244599   19948 start.go:340] cluster config:
	{Name:flannel-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:40:26.247972   19948 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:40:26.256465   19948 out.go:177] * Starting "flannel-159000" primary control-plane node in "flannel-159000" cluster
	I0729 04:40:26.260504   19948 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:40:26.260529   19948 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:40:26.260540   19948 cache.go:56] Caching tarball of preloaded images
	I0729 04:40:26.260595   19948 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:40:26.260600   19948 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:40:26.260657   19948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/flannel-159000/config.json ...
	I0729 04:40:26.260667   19948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/flannel-159000/config.json: {Name:mk2aaee3eeb0b49d0b1a5da0acfeca25d855aa59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:40:26.260869   19948 start.go:360] acquireMachinesLock for flannel-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:26.260902   19948 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "flannel-159000"
	I0729 04:40:26.260913   19948 start.go:93] Provisioning new machine with config: &{Name:flannel-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:26.260934   19948 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:26.269443   19948 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:26.284549   19948 start.go:159] libmachine.API.Create for "flannel-159000" (driver="qemu2")
	I0729 04:40:26.284689   19948 client.go:168] LocalClient.Create starting
	I0729 04:40:26.284760   19948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:26.284789   19948 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:26.284801   19948 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:26.284838   19948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:26.284860   19948 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:26.284869   19948 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:26.285229   19948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:26.437528   19948 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:26.562685   19948 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:26.562692   19948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:26.562925   19948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2
	I0729 04:40:26.572206   19948 main.go:141] libmachine: STDOUT: 
	I0729 04:40:26.572232   19948 main.go:141] libmachine: STDERR: 
	I0729 04:40:26.572285   19948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2 +20000M
	I0729 04:40:26.580101   19948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:26.580117   19948 main.go:141] libmachine: STDERR: 
	I0729 04:40:26.580141   19948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2
	I0729 04:40:26.580147   19948 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:26.580159   19948 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:26.580183   19948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:85:4c:20:c7:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2
	I0729 04:40:26.581787   19948 main.go:141] libmachine: STDOUT: 
	I0729 04:40:26.581804   19948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:26.581825   19948 client.go:171] duration metric: took 297.074417ms to LocalClient.Create
	I0729 04:40:28.584466   19948 start.go:128] duration metric: took 2.32309s to createHost
	I0729 04:40:28.584549   19948 start.go:83] releasing machines lock for "flannel-159000", held for 2.323223875s
	W0729 04:40:28.584620   19948 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:28.600058   19948 out.go:177] * Deleting "flannel-159000" in qemu2 ...
	W0729 04:40:28.627050   19948 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:28.627087   19948 start.go:729] Will try again in 5 seconds ...
	I0729 04:40:33.630054   19948 start.go:360] acquireMachinesLock for flannel-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:33.630675   19948 start.go:364] duration metric: took 471.667µs to acquireMachinesLock for "flannel-159000"
	I0729 04:40:33.630832   19948 start.go:93] Provisioning new machine with config: &{Name:flannel-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:33.631058   19948 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:33.639614   19948 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:33.690213   19948 start.go:159] libmachine.API.Create for "flannel-159000" (driver="qemu2")
	I0729 04:40:33.690264   19948 client.go:168] LocalClient.Create starting
	I0729 04:40:33.690397   19948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:33.690456   19948 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:33.690474   19948 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:33.690536   19948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:33.690581   19948 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:33.690592   19948 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:33.691145   19948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:33.848358   19948 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:33.906550   19948 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:33.906558   19948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:33.906780   19948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2
	I0729 04:40:33.916074   19948 main.go:141] libmachine: STDOUT: 
	I0729 04:40:33.916096   19948 main.go:141] libmachine: STDERR: 
	I0729 04:40:33.916150   19948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2 +20000M
	I0729 04:40:33.924222   19948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:33.924238   19948 main.go:141] libmachine: STDERR: 
	I0729 04:40:33.924250   19948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2
	I0729 04:40:33.924253   19948 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:33.924270   19948 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:33.924305   19948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:f6:1b:60:0d:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/flannel-159000/disk.qcow2
	I0729 04:40:33.925980   19948 main.go:141] libmachine: STDOUT: 
	I0729 04:40:33.925998   19948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:33.926014   19948 client.go:171] duration metric: took 235.717917ms to LocalClient.Create
	I0729 04:40:35.928320   19948 start.go:128] duration metric: took 2.297005958s to createHost
	I0729 04:40:35.928363   19948 start.go:83] releasing machines lock for "flannel-159000", held for 2.297433875s
	W0729 04:40:35.928569   19948 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:35.939070   19948 out.go:177] 
	W0729 04:40:35.944043   19948 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:40:35.944061   19948 out.go:239] * 
	* 
	W0729 04:40:35.945158   19948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:40:35.958015   19948 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.82s)

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.880968292s)

-- stdout --
	* [bridge-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-159000" primary control-plane node in "bridge-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:40:38.334272   20065 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:40:38.334396   20065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:38.334400   20065 out.go:304] Setting ErrFile to fd 2...
	I0729 04:40:38.334402   20065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:38.334557   20065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:40:38.335577   20065 out.go:298] Setting JSON to false
	I0729 04:40:38.352287   20065 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9607,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:40:38.352358   20065 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:40:38.358949   20065 out.go:177] * [bridge-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:40:38.365773   20065 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:40:38.365817   20065 notify.go:220] Checking for updates...
	I0729 04:40:38.374692   20065 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:40:38.378713   20065 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:40:38.382590   20065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:40:38.385727   20065 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:40:38.388757   20065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:40:38.392041   20065 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:40:38.392110   20065 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:40:38.392162   20065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:40:38.395690   20065 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:40:38.402746   20065 start.go:297] selected driver: qemu2
	I0729 04:40:38.402754   20065 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:40:38.402762   20065 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:40:38.405021   20065 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:40:38.407678   20065 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:40:38.411824   20065 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:40:38.411858   20065 cni.go:84] Creating CNI manager for "bridge"
	I0729 04:40:38.411867   20065 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:40:38.411892   20065 start.go:340] cluster config:
	{Name:bridge-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:40:38.415456   20065 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:40:38.423685   20065 out.go:177] * Starting "bridge-159000" primary control-plane node in "bridge-159000" cluster
	I0729 04:40:38.427820   20065 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:40:38.427873   20065 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:40:38.427892   20065 cache.go:56] Caching tarball of preloaded images
	I0729 04:40:38.428035   20065 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:40:38.428055   20065 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:40:38.428120   20065 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/bridge-159000/config.json ...
	I0729 04:40:38.428132   20065 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/bridge-159000/config.json: {Name:mk7f2cabff4d1cac690d5557057c7abd487b7c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:40:38.428386   20065 start.go:360] acquireMachinesLock for bridge-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:38.428419   20065 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "bridge-159000"
	I0729 04:40:38.428433   20065 start.go:93] Provisioning new machine with config: &{Name:bridge-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:38.428487   20065 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:38.432796   20065 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:38.449766   20065 start.go:159] libmachine.API.Create for "bridge-159000" (driver="qemu2")
	I0729 04:40:38.449790   20065 client.go:168] LocalClient.Create starting
	I0729 04:40:38.449873   20065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:38.449904   20065 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:38.449916   20065 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:38.449954   20065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:38.449977   20065 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:38.449983   20065 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:38.450340   20065 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:38.600562   20065 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:38.792811   20065 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:38.792824   20065 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:38.793072   20065 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2
	I0729 04:40:38.802800   20065 main.go:141] libmachine: STDOUT: 
	I0729 04:40:38.802820   20065 main.go:141] libmachine: STDERR: 
	I0729 04:40:38.802864   20065 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2 +20000M
	I0729 04:40:38.810655   20065 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:38.810670   20065 main.go:141] libmachine: STDERR: 
	I0729 04:40:38.810686   20065 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2
	I0729 04:40:38.810692   20065 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:38.810713   20065 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:38.810738   20065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:b5:cf:71:10:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2
	I0729 04:40:38.812392   20065 main.go:141] libmachine: STDOUT: 
	I0729 04:40:38.812411   20065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:38.812430   20065 client.go:171] duration metric: took 362.610583ms to LocalClient.Create
	I0729 04:40:40.814772   20065 start.go:128] duration metric: took 2.38609125s to createHost
	I0729 04:40:40.814847   20065 start.go:83] releasing machines lock for "bridge-159000", held for 2.386257s
	W0729 04:40:40.814964   20065 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:40.825147   20065 out.go:177] * Deleting "bridge-159000" in qemu2 ...
	W0729 04:40:40.853572   20065 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:40.853605   20065 start.go:729] Will try again in 5 seconds ...
	I0729 04:40:45.856036   20065 start.go:360] acquireMachinesLock for bridge-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:45.856717   20065 start.go:364] duration metric: took 547.083µs to acquireMachinesLock for "bridge-159000"
	I0729 04:40:45.856899   20065 start.go:93] Provisioning new machine with config: &{Name:bridge-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:45.857209   20065 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:45.862748   20065 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:45.911955   20065 start.go:159] libmachine.API.Create for "bridge-159000" (driver="qemu2")
	I0729 04:40:45.912177   20065 client.go:168] LocalClient.Create starting
	I0729 04:40:45.912350   20065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:45.912423   20065 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:45.912443   20065 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:45.912520   20065 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:45.912569   20065 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:45.912579   20065 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:45.913110   20065 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:46.071096   20065 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:46.127469   20065 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:46.127477   20065 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:46.127688   20065 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2
	I0729 04:40:46.137448   20065 main.go:141] libmachine: STDOUT: 
	I0729 04:40:46.137469   20065 main.go:141] libmachine: STDERR: 
	I0729 04:40:46.137548   20065 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2 +20000M
	I0729 04:40:46.145810   20065 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:46.145827   20065 main.go:141] libmachine: STDERR: 
	I0729 04:40:46.145837   20065 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2
	I0729 04:40:46.145842   20065 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:46.145849   20065 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:46.145877   20065 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:32:cc:46:b7:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/bridge-159000/disk.qcow2
	I0729 04:40:46.147585   20065 main.go:141] libmachine: STDOUT: 
	I0729 04:40:46.147602   20065 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:46.147614   20065 client.go:171] duration metric: took 235.414375ms to LocalClient.Create
	I0729 04:40:48.149768   20065 start.go:128] duration metric: took 2.292466083s to createHost
	I0729 04:40:48.149806   20065 start.go:83] releasing machines lock for "bridge-159000", held for 2.292955417s
	W0729 04:40:48.149957   20065 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:48.159294   20065 out.go:177] 
	W0729 04:40:48.166263   20065 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:40:48.166276   20065 out.go:239] * 
	* 
	W0729 04:40:48.167041   20065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:40:48.178204   20065 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
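Every failure in this run traces to the same root cause: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and host creation aborts. A minimal first check on the test host might look like the following sketch; the assumption that socket_vmnet was installed via Homebrew and runs as a root service is ours, not stated in the log:

	# Does the Unix socket exist at the path minikube expects?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet is Homebrew-managed (assumption), inspect and restart the root service:
	sudo brew services list | grep socket_vmnet
	sudo brew services restart socket_vmnet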

TestNetworkPlugins/group/kubenet/Start (10.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-159000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.152566125s)

-- stdout --
	* [kubenet-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-159000" primary control-plane node in "kubenet-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:40:50.360710   20179 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:40:50.360859   20179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:50.360862   20179 out.go:304] Setting ErrFile to fd 2...
	I0729 04:40:50.360864   20179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:40:50.361003   20179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:40:50.362107   20179 out.go:298] Setting JSON to false
	I0729 04:40:50.378778   20179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9619,"bootTime":1722243631,"procs":503,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:40:50.378865   20179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:40:50.385788   20179 out.go:177] * [kubenet-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:40:50.393722   20179 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:40:50.393768   20179 notify.go:220] Checking for updates...
	I0729 04:40:50.401516   20179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:40:50.404720   20179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:40:50.408661   20179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:40:50.410075   20179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:40:50.412653   20179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:40:50.416007   20179 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:40:50.416080   20179 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:40:50.416128   20179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:40:50.419529   20179 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:40:50.426646   20179 start.go:297] selected driver: qemu2
	I0729 04:40:50.426652   20179 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:40:50.426657   20179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:40:50.428951   20179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:40:50.432560   20179 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:40:50.435783   20179 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:40:50.435824   20179 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0729 04:40:50.435850   20179 start.go:340] cluster config:
	{Name:kubenet-159000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:40:50.439494   20179 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:40:50.446606   20179 out.go:177] * Starting "kubenet-159000" primary control-plane node in "kubenet-159000" cluster
	I0729 04:40:50.450648   20179 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:40:50.450662   20179 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:40:50.450669   20179 cache.go:56] Caching tarball of preloaded images
	I0729 04:40:50.450723   20179 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:40:50.450728   20179 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:40:50.450790   20179 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kubenet-159000/config.json ...
	I0729 04:40:50.450800   20179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/kubenet-159000/config.json: {Name:mkc65a4d051353871119dbca81123a8355d79c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:40:50.451170   20179 start.go:360] acquireMachinesLock for kubenet-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:50.451202   20179 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "kubenet-159000"
	I0729 04:40:50.451213   20179 start.go:93] Provisioning new machine with config: &{Name:kubenet-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:50.451244   20179 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:50.454763   20179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:50.470757   20179 start.go:159] libmachine.API.Create for "kubenet-159000" (driver="qemu2")
	I0729 04:40:50.470784   20179 client.go:168] LocalClient.Create starting
	I0729 04:40:50.470841   20179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:50.470872   20179 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:50.470881   20179 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:50.470918   20179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:50.470940   20179 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:50.470949   20179 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:50.471361   20179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:50.620927   20179 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:50.773040   20179 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:50.773047   20179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:50.773282   20179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2
	I0729 04:40:50.783308   20179 main.go:141] libmachine: STDOUT: 
	I0729 04:40:50.783330   20179 main.go:141] libmachine: STDERR: 
	I0729 04:40:50.783392   20179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2 +20000M
	I0729 04:40:50.791506   20179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:50.791522   20179 main.go:141] libmachine: STDERR: 
	I0729 04:40:50.791535   20179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2
	I0729 04:40:50.791540   20179 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:50.791555   20179 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:50.791582   20179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:1a:3a:0b:0a:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2
	I0729 04:40:50.793267   20179 main.go:141] libmachine: STDOUT: 
	I0729 04:40:50.793283   20179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:50.793301   20179 client.go:171] duration metric: took 322.506291ms to LocalClient.Create
	I0729 04:40:52.795563   20179 start.go:128] duration metric: took 2.344242833s to createHost
	I0729 04:40:52.795634   20179 start.go:83] releasing machines lock for "kubenet-159000", held for 2.344379167s
	W0729 04:40:52.795712   20179 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:52.807344   20179 out.go:177] * Deleting "kubenet-159000" in qemu2 ...
	W0729 04:40:52.838148   20179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:40:52.838186   20179 start.go:729] Will try again in 5 seconds ...
	I0729 04:40:57.840470   20179 start.go:360] acquireMachinesLock for kubenet-159000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:40:57.840924   20179 start.go:364] duration metric: took 358.333µs to acquireMachinesLock for "kubenet-159000"
	I0729 04:40:57.840989   20179 start.go:93] Provisioning new machine with config: &{Name:kubenet-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:40:57.841172   20179 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:40:57.851871   20179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 04:40:57.892970   20179 start.go:159] libmachine.API.Create for "kubenet-159000" (driver="qemu2")
	I0729 04:40:57.893037   20179 client.go:168] LocalClient.Create starting
	I0729 04:40:57.893228   20179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:40:57.893294   20179 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:57.893308   20179 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:57.893366   20179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:40:57.893407   20179 main.go:141] libmachine: Decoding PEM data...
	I0729 04:40:57.893417   20179 main.go:141] libmachine: Parsing certificate...
	I0729 04:40:57.893882   20179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:40:58.049357   20179 main.go:141] libmachine: Creating SSH key...
	I0729 04:40:58.425732   20179 main.go:141] libmachine: Creating Disk image...
	I0729 04:40:58.425746   20179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:40:58.426021   20179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2
	I0729 04:40:58.435935   20179 main.go:141] libmachine: STDOUT: 
	I0729 04:40:58.435955   20179 main.go:141] libmachine: STDERR: 
	I0729 04:40:58.436016   20179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2 +20000M
	I0729 04:40:58.443960   20179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:40:58.443972   20179 main.go:141] libmachine: STDERR: 
	I0729 04:40:58.443985   20179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2
	I0729 04:40:58.443989   20179 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:40:58.444007   20179 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:40:58.444039   20179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:df:ef:d1:c0:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/kubenet-159000/disk.qcow2
	I0729 04:40:58.445685   20179 main.go:141] libmachine: STDOUT: 
	I0729 04:40:58.445699   20179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:40:58.445712   20179 client.go:171] duration metric: took 552.652292ms to LocalClient.Create
	I0729 04:41:00.447903   20179 start.go:128] duration metric: took 2.606693541s to createHost
	I0729 04:41:00.447963   20179 start.go:83] releasing machines lock for "kubenet-159000", held for 2.607014291s
	W0729 04:41:00.448370   20179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:00.452213   20179 out.go:177] 
	W0729 04:41:00.459136   20179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:00.459163   20179 out.go:239] * 
	* 
	W0729 04:41:00.461871   20179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:00.470970   20179 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.15s)
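The kubenet group fails identically, including the automatic retry ("Will try again in 5 seconds"): the retry machinery works, but it cannot succeed while the daemon behind /var/run/socket_vmnet is down. The failing step can be reproduced outside the test harness by invoking the client the same way libmachine does above; the binary and socket path come straight from the log, while using `true` as a stand-in child command is our simplification:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# While the daemon is down, this should print:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused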

TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-623000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-623000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.772231333s)

-- stdout --
	* [old-k8s-version-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-623000" primary control-plane node in "old-k8s-version-623000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-623000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:02.667366   20292 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:02.667493   20292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:02.667497   20292 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:02.667499   20292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:02.667644   20292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:02.668995   20292 out.go:298] Setting JSON to false
	I0729 04:41:02.687843   20292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9631,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:02.687935   20292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:02.693522   20292 out.go:177] * [old-k8s-version-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:02.699535   20292 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:02.699637   20292 notify.go:220] Checking for updates...
	I0729 04:41:02.707430   20292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:02.710511   20292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:02.714560   20292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:02.717438   20292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:02.720486   20292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:02.723875   20292 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:02.723951   20292 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:41:02.724008   20292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:02.731389   20292 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:41:02.738478   20292 start.go:297] selected driver: qemu2
	I0729 04:41:02.738488   20292 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:41:02.738496   20292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:02.741116   20292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:41:02.744481   20292 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:41:02.748577   20292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:02.748593   20292 cni.go:84] Creating CNI manager for ""
	I0729 04:41:02.748602   20292 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:41:02.748628   20292 start.go:340] cluster config:
	{Name:old-k8s-version-623000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:02.752744   20292 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:02.759447   20292 out.go:177] * Starting "old-k8s-version-623000" primary control-plane node in "old-k8s-version-623000" cluster
	I0729 04:41:02.763418   20292 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:41:02.763454   20292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:41:02.763467   20292 cache.go:56] Caching tarball of preloaded images
	I0729 04:41:02.763569   20292 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:41:02.763585   20292 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:41:02.763649   20292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/old-k8s-version-623000/config.json ...
	I0729 04:41:02.763661   20292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/old-k8s-version-623000/config.json: {Name:mk10d5eb73ee6055d237c79d76ae184bfed3a360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:41:02.763934   20292 start.go:360] acquireMachinesLock for old-k8s-version-623000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:02.763966   20292 start.go:364] duration metric: took 26.291µs to acquireMachinesLock for "old-k8s-version-623000"
	I0729 04:41:02.763978   20292 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:02.764016   20292 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:02.768507   20292 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:02.785109   20292 start.go:159] libmachine.API.Create for "old-k8s-version-623000" (driver="qemu2")
	I0729 04:41:02.785137   20292 client.go:168] LocalClient.Create starting
	I0729 04:41:02.785221   20292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:02.785255   20292 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:02.785265   20292 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:02.785307   20292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:02.785329   20292 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:02.785335   20292 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:02.785740   20292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:02.934823   20292 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:03.018249   20292 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:03.018262   20292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:03.018494   20292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:03.027991   20292 main.go:141] libmachine: STDOUT: 
	I0729 04:41:03.028010   20292 main.go:141] libmachine: STDERR: 
	I0729 04:41:03.028071   20292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2 +20000M
	I0729 04:41:03.035993   20292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:03.036007   20292 main.go:141] libmachine: STDERR: 
	I0729 04:41:03.036018   20292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:03.036021   20292 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:03.036044   20292 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:03.036077   20292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:2c:e6:e1:f6:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:03.037731   20292 main.go:141] libmachine: STDOUT: 
	I0729 04:41:03.037748   20292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:03.037778   20292 client.go:171] duration metric: took 252.637042ms to LocalClient.Create
	I0729 04:41:05.039971   20292 start.go:128] duration metric: took 2.275936792s to createHost
	I0729 04:41:05.040099   20292 start.go:83] releasing machines lock for "old-k8s-version-623000", held for 2.2761195s
	W0729 04:41:05.040189   20292 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:05.047501   20292 out.go:177] * Deleting "old-k8s-version-623000" in qemu2 ...
	W0729 04:41:05.078985   20292 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:05.079036   20292 start.go:729] Will try again in 5 seconds ...
	I0729 04:41:10.081283   20292 start.go:360] acquireMachinesLock for old-k8s-version-623000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:10.081826   20292 start.go:364] duration metric: took 438.375µs to acquireMachinesLock for "old-k8s-version-623000"
	I0729 04:41:10.081968   20292 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:10.082275   20292 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:10.087744   20292 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:10.137641   20292 start.go:159] libmachine.API.Create for "old-k8s-version-623000" (driver="qemu2")
	I0729 04:41:10.137706   20292 client.go:168] LocalClient.Create starting
	I0729 04:41:10.137838   20292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:10.137906   20292 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:10.137930   20292 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:10.137991   20292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:10.138036   20292 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:10.138052   20292 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:10.138649   20292 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:10.295611   20292 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:10.351357   20292 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:10.351362   20292 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:10.351572   20292 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:10.360762   20292 main.go:141] libmachine: STDOUT: 
	I0729 04:41:10.360780   20292 main.go:141] libmachine: STDERR: 
	I0729 04:41:10.360839   20292 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2 +20000M
	I0729 04:41:10.368712   20292 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:10.368728   20292 main.go:141] libmachine: STDERR: 
	I0729 04:41:10.368739   20292 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:10.368744   20292 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:10.368755   20292 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:10.368783   20292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:58:21:a5:bc:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:10.370604   20292 main.go:141] libmachine: STDOUT: 
	I0729 04:41:10.370622   20292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:10.370636   20292 client.go:171] duration metric: took 232.924834ms to LocalClient.Create
	I0729 04:41:12.372117   20292 start.go:128] duration metric: took 2.289831417s to createHost
	I0729 04:41:12.372181   20292 start.go:83] releasing machines lock for "old-k8s-version-623000", held for 2.290359708s
	W0729 04:41:12.372386   20292 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-623000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:12.383878   20292 out.go:177] 
	W0729 04:41:12.387815   20292 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:12.387830   20292 out.go:239] * 
	* 
	W0729 04:41:12.389447   20292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:12.400599   20292 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-623000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (53.348584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
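Every failure in this serial group traces to the single root cause visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand qemu a network fd and VM creation aborts. A minimal Go probe that reproduces the same check is sketched below; the socket path is taken from the log, everything else is illustrative and not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path as it appears in the failing qemu invocation above.
	const sock = "/var/run/socket_vmnet"

	// socket_vmnet_client begins with what is effectively a unix-domain
	// connect; "connection refused" here means the daemon is not running.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet daemon on the Jenkins host (however it is supervised there) should clear this and the dependent failures below.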

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-623000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-623000 create -f testdata/busybox.yaml: exit status 1 (28.832667ms)

** stderr ** 
	error: context "old-k8s-version-623000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-623000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (29.041291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (29.15475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
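The context "old-k8s-version-623000" does not exist error is a knock-on effect of FirstStart: since the VM was never provisioned, minikube never wrote a context for this profile into the kubeconfig reported earlier. A hedged client-go sketch for inspecting that file directly (the test itself shells out to kubectl; this assumes k8s.io/client-go is available in go.mod):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG path as reported in the test's stdout.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
	// A failed FirstStart leaves no entry for the profile.
	if _, ok := cfg.Contexts["old-k8s-version-623000"]; !ok {
		fmt.Println(`context "old-k8s-version-623000" does not exist`)
	}
}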

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-623000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-623000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-623000 describe deploy/metrics-server -n kube-system: exit status 1 (27.661125ms)

** stderr ** 
	error: context "old-k8s-version-623000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-623000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (28.893125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
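The assertion at start_stop_delete_test.go:221 documents what the flags are expected to produce: --registries=MetricsServer=fake.domain prepends the fake registry to the image supplied via --images, yielding fake.domain/registry.k8s.io/echoserver:1.4. A tiny sketch of that composition (illustrative only, not minikube's actual implementation):

package main

import "fmt"

// withRegistry mirrors the expectation in the assertion above: the
// custom registry is joined in front of the overridden addon image.
func withRegistry(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(withRegistry("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}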

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-623000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-623000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.195527292s)

-- stdout --
	* [old-k8s-version-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-623000" primary control-plane node in "old-k8s-version-623000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-623000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-623000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:16.097174   20346 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:16.097297   20346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:16.097301   20346 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:16.097303   20346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:16.097424   20346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:16.098497   20346 out.go:298] Setting JSON to false
	I0729 04:41:16.115112   20346 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9645,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:16.115183   20346 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:16.118484   20346 out.go:177] * [old-k8s-version-623000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:16.126435   20346 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:16.126495   20346 notify.go:220] Checking for updates...
	I0729 04:41:16.132338   20346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:16.135418   20346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:16.138450   20346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:16.139731   20346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:16.142398   20346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:16.145749   20346 config.go:182] Loaded profile config "old-k8s-version-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:41:16.149376   20346 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 04:41:16.152437   20346 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:16.156389   20346 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:41:16.163398   20346 start.go:297] selected driver: qemu2
	I0729 04:41:16.163406   20346 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:16.163469   20346 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:16.165776   20346 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:16.165815   20346 cni.go:84] Creating CNI manager for ""
	I0729 04:41:16.165821   20346 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:41:16.165848   20346 start.go:340] cluster config:
	{Name:old-k8s-version-623000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:16.169281   20346 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:16.177346   20346 out.go:177] * Starting "old-k8s-version-623000" primary control-plane node in "old-k8s-version-623000" cluster
	I0729 04:41:16.181472   20346 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:41:16.181484   20346 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:41:16.181491   20346 cache.go:56] Caching tarball of preloaded images
	I0729 04:41:16.181543   20346 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:41:16.181548   20346 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:41:16.181598   20346 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/old-k8s-version-623000/config.json ...
	I0729 04:41:16.182069   20346 start.go:360] acquireMachinesLock for old-k8s-version-623000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:16.182095   20346 start.go:364] duration metric: took 20.125µs to acquireMachinesLock for "old-k8s-version-623000"
	I0729 04:41:16.182104   20346 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:41:16.182109   20346 fix.go:54] fixHost starting: 
	I0729 04:41:16.182218   20346 fix.go:112] recreateIfNeeded on old-k8s-version-623000: state=Stopped err=<nil>
	W0729 04:41:16.182226   20346 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:41:16.186401   20346 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-623000" ...
	I0729 04:41:16.193464   20346 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:16.193498   20346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:58:21:a5:bc:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:16.195503   20346 main.go:141] libmachine: STDOUT: 
	I0729 04:41:16.195520   20346 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:16.195545   20346 fix.go:56] duration metric: took 13.436583ms for fixHost
	I0729 04:41:16.195549   20346 start.go:83] releasing machines lock for "old-k8s-version-623000", held for 13.450541ms
	W0729 04:41:16.195562   20346 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:16.195593   20346 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:16.195597   20346 start.go:729] Will try again in 5 seconds ...
	I0729 04:41:21.197741   20346 start.go:360] acquireMachinesLock for old-k8s-version-623000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:21.198311   20346 start.go:364] duration metric: took 441.208µs to acquireMachinesLock for "old-k8s-version-623000"
	I0729 04:41:21.198479   20346 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:41:21.198501   20346 fix.go:54] fixHost starting: 
	I0729 04:41:21.199256   20346 fix.go:112] recreateIfNeeded on old-k8s-version-623000: state=Stopped err=<nil>
	W0729 04:41:21.199284   20346 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:41:21.205066   20346 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-623000" ...
	I0729 04:41:21.220105   20346 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:21.220401   20346 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:58:21:a5:bc:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/old-k8s-version-623000/disk.qcow2
	I0729 04:41:21.229702   20346 main.go:141] libmachine: STDOUT: 
	I0729 04:41:21.229754   20346 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:21.229835   20346 fix.go:56] duration metric: took 31.338375ms for fixHost
	I0729 04:41:21.229888   20346 start.go:83] releasing machines lock for "old-k8s-version-623000", held for 31.519ms
	W0729 04:41:21.230062   20346 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-623000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:21.237015   20346 out.go:177] 
	W0729 04:41:21.241044   20346 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:21.241064   20346 out.go:239] * 
	W0729 04:41:21.242340   20346 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:21.251962   20346 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-623000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (59.426708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
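SecondStart exercises minikube's recovery path: fixHost fails, the log prints "Will try again in 5 seconds ...", one retry runs, and only then does the command exit with GUEST_PROVISION (exit status 80). A sketch of that single-retry shape, with names that are illustrative rather than minikube's actual start.go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the fixHost call that fails twice in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			// A real CLI would exit non-zero here; the observed status is 80.
		}
	}
}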

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-623000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (31.830792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-623000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-623000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-623000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.975167ms)

** stderr ** 
	error: context "old-k8s-version-623000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-623000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (28.718875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-623000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (29.6155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
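The (-want +got) block above is a go-cmp style diff: every expected v1.20.0 image is "missing" simply because image list ran against a stopped host and returned nothing. A minimal reproduction of that diff output, assuming github.com/google/go-cmp (which the -want +got convention suggests):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	var got []string // a stopped host reports no images

	// Entries present only in want are printed with a leading "-",
	// matching the block in the log above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
	}
}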

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-623000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-623000 --alsologtostderr -v=1: exit status 83 (40.162583ms)

-- stdout --
	* The control-plane node old-k8s-version-623000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-623000"

-- /stdout --
** stderr ** 
	I0729 04:41:21.510089   20367 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:21.510944   20367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:21.510948   20367 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:21.510950   20367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:21.511062   20367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:21.511259   20367 out.go:298] Setting JSON to false
	I0729 04:41:21.511266   20367 mustload.go:65] Loading cluster: old-k8s-version-623000
	I0729 04:41:21.511450   20367 config.go:182] Loaded profile config "old-k8s-version-623000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0729 04:41:21.515392   20367 out.go:177] * The control-plane node old-k8s-version-623000 host is not running: state=Stopped
	I0729 04:41:21.519326   20367 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-623000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-623000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (28.086375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (28.063542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-623000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
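Note how the helpers treat exit codes differently: 83 from minikube pause (host not running) fails the test outright, while 7 from status is logged as "may be ok". A sketch of extracting such codes with os/exec; the command and arguments are copied from the log, and the interpretation of 7 follows the helper's own comment:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status,
// or -1 if it could not be started at all.
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	code := exitCode("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-623000")
	fmt.Println("status exit code:", code) // 7 here indicates a stopped host
}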

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.881278875s)

-- stdout --
	* [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:21.825975   20384 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:21.826112   20384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:21.826116   20384 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:21.826118   20384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:21.826263   20384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:21.827440   20384 out.go:298] Setting JSON to false
	I0729 04:41:21.844691   20384 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9650,"bootTime":1722243631,"procs":498,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:21.844761   20384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:21.849214   20384 out.go:177] * [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:21.857228   20384 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:21.857260   20384 notify.go:220] Checking for updates...
	I0729 04:41:21.864126   20384 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:21.865658   20384 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:21.870154   20384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:21.873159   20384 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:21.874256   20384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:21.877520   20384 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:21.877584   20384 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:41:21.877633   20384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:21.881139   20384 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:41:21.886180   20384 start.go:297] selected driver: qemu2
	I0729 04:41:21.886188   20384 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:41:21.886194   20384 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:21.888393   20384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:41:21.891191   20384 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:41:21.895078   20384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:21.895107   20384 cni.go:84] Creating CNI manager for ""
	I0729 04:41:21.895116   20384 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:41:21.895120   20384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:41:21.895154   20384 start.go:340] cluster config:
	{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:21.898721   20384 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.907160   20384 out.go:177] * Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	I0729 04:41:21.911195   20384 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:41:21.911305   20384 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/no-preload-265000/config.json ...
	I0729 04:41:21.911303   20384 cache.go:107] acquiring lock: {Name:mk8842ae6ad28a24fa503e66c7d7e0f4e6e478af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911306   20384 cache.go:107] acquiring lock: {Name:mk899f9a594768a2184e26b206c707132da4274d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911320   20384 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/no-preload-265000/config.json: {Name:mkbdc04a39336d7fb3982752e516e82257398346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:41:21.911329   20384 cache.go:107] acquiring lock: {Name:mk28200d3381776576653142c4c685edd08ef9f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911334   20384 cache.go:107] acquiring lock: {Name:mk503ec5c5eec1785a0b6d15fd504cdd12d81e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911338   20384 cache.go:107] acquiring lock: {Name:mk1401097a3625fcfc93e0ec4b7d43f70b490ef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911456   20384 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 04:41:21.911486   20384 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 04:41:21.911501   20384 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 04:41:21.911510   20384 cache.go:107] acquiring lock: {Name:mk4ced7186c686211c3b1b988c0aac113a85affa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911565   20384 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 04:41:21.911566   20384 cache.go:107] acquiring lock: {Name:mk63f78fab619bfea21b5693ae87e8cc5f7577a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911589   20384 cache.go:107] acquiring lock: {Name:mk7155f8a14bb7598198a4dc3781f0f9bffb9786 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:21.911640   20384 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:41:21.911660   20384 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 353.958µs
	I0729 04:41:21.911674   20384 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:41:21.911683   20384 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 04:41:21.911763   20384 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 04:41:21.911762   20384 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:21.911773   20384 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 04:41:21.911805   20384 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "no-preload-265000"
	I0729 04:41:21.911817   20384 start.go:93] Provisioning new machine with config: &{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:21.911865   20384 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:21.916122   20384 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:21.918826   20384 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 04:41:21.918958   20384 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 04:41:21.919122   20384 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 04:41:21.919557   20384 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 04:41:21.919717   20384 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 04:41:21.919760   20384 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 04:41:21.919788   20384 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 04:41:21.932734   20384 start.go:159] libmachine.API.Create for "no-preload-265000" (driver="qemu2")
	I0729 04:41:21.932752   20384 client.go:168] LocalClient.Create starting
	I0729 04:41:21.932861   20384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:21.932893   20384 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:21.932902   20384 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:21.932942   20384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:21.932965   20384 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:21.932973   20384 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:21.933356   20384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:22.090417   20384 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:22.157905   20384 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:22.157922   20384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:22.158172   20384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:22.168116   20384 main.go:141] libmachine: STDOUT: 
	I0729 04:41:22.168152   20384 main.go:141] libmachine: STDERR: 
	I0729 04:41:22.168205   20384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2 +20000M
	I0729 04:41:22.177273   20384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:22.177290   20384 main.go:141] libmachine: STDERR: 
	I0729 04:41:22.177303   20384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:22.177306   20384 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:22.177319   20384 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:22.177345   20384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:bd:29:e3:bb:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:22.179283   20384 main.go:141] libmachine: STDOUT: 
	I0729 04:41:22.179304   20384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:22.179322   20384 client.go:171] duration metric: took 246.570791ms to LocalClient.Create
	I0729 04:41:22.340433   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 04:41:22.341277   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 04:41:22.345114   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0729 04:41:22.364207   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 04:41:22.366861   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0729 04:41:22.384068   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 04:41:22.389352   20384 cache.go:162] opening:  /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 04:41:22.469168   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 04:41:22.469182   20384 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 557.681292ms
	I0729 04:41:22.469191   20384 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 04:41:24.179460   20384 start.go:128] duration metric: took 2.267619417s to createHost
	I0729 04:41:24.179482   20384 start.go:83] releasing machines lock for "no-preload-265000", held for 2.267713333s
	W0729 04:41:24.179501   20384 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:24.190932   20384 out.go:177] * Deleting "no-preload-265000" in qemu2 ...
	W0729 04:41:24.203388   20384 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:24.203397   20384 start.go:729] Will try again in 5 seconds ...
	I0729 04:41:24.849928   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 04:41:24.849950   20384 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.938501167s
	I0729 04:41:24.849959   20384 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 04:41:26.007854   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 04:41:26.007876   20384 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 4.096636542s
	I0729 04:41:26.007891   20384 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 04:41:26.422253   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 04:41:26.422276   20384 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.511023417s
	I0729 04:41:26.422287   20384 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 04:41:26.499480   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 04:41:26.499495   20384 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.588280333s
	I0729 04:41:26.499511   20384 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 04:41:26.536198   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 04:41:26.536214   20384 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.6249685s
	I0729 04:41:26.536225   20384 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 04:41:29.203445   20384 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:29.203686   20384 start.go:364] duration metric: took 203.125µs to acquireMachinesLock for "no-preload-265000"
	I0729 04:41:29.203727   20384 start.go:93] Provisioning new machine with config: &{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:29.203825   20384 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:29.212231   20384 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:29.247575   20384 start.go:159] libmachine.API.Create for "no-preload-265000" (driver="qemu2")
	I0729 04:41:29.247631   20384 client.go:168] LocalClient.Create starting
	I0729 04:41:29.247792   20384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:29.247868   20384 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:29.247884   20384 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:29.247947   20384 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:29.247999   20384 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:29.248013   20384 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:29.248507   20384 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:29.403809   20384 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:29.613183   20384 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:29.613191   20384 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:29.613409   20384 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:29.623344   20384 main.go:141] libmachine: STDOUT: 
	I0729 04:41:29.623381   20384 main.go:141] libmachine: STDERR: 
	I0729 04:41:29.623443   20384 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2 +20000M
	I0729 04:41:29.631706   20384 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:29.631737   20384 main.go:141] libmachine: STDERR: 
	I0729 04:41:29.631752   20384 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:29.631756   20384 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:29.631772   20384 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:29.631812   20384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f2:fd:57:7c:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:29.633643   20384 main.go:141] libmachine: STDOUT: 
	I0729 04:41:29.633666   20384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:29.633680   20384 client.go:171] duration metric: took 386.018917ms to LocalClient.Create
	I0729 04:41:30.142704   20384 cache.go:157] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 04:41:30.142733   20384 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.23131325s
	I0729 04:41:30.142745   20384 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 04:41:30.142771   20384 cache.go:87] Successfully saved all images to host disk.
	I0729 04:41:31.635664   20384 start.go:128] duration metric: took 2.431836083s to createHost
	I0729 04:41:31.635738   20384 start.go:83] releasing machines lock for "no-preload-265000", held for 2.432085833s
	W0729 04:41:31.636152   20384 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:31.644511   20384 out.go:177] 
	W0729 04:41:31.651661   20384 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:31.651679   20384 out.go:239] * 
	* 
	W0729 04:41:31.653683   20384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:31.663668   20384 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (61.734833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
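Every start attempt in this group dies the same way: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). The sketch below is a minimal diagnostic for that condition; the binary and socket paths are copied from the log above, while the daemon invocation and its flags are assumptions about how socket_vmnet is installed on this agent, not something the log confirms.

	# Is a socket_vmnet daemon listening where the tests expect one?
	ls -l /var/run/socket_vmnet      # missing file => daemon never started
	pgrep -fl socket_vmnet           # no output    => daemon not running

	# Reproduce the failure without QEMU: with nothing listening, the client
	# prints the same 'Failed to connect to "/var/run/socket_vmnet"' error
	# and exits non-zero (payload swapped from qemu-system-aarch64 to true).
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

	# Assumed recovery step: start the daemon as root; the flag value is an
	# assumption, so check socket_vmnet --help on the agent first.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet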

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-265000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-265000 create -f testdata/busybox.yaml: exit status 1 (29.898875ms)

** stderr ** 
	error: context "no-preload-265000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-265000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.450292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (28.487084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-265000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-265000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-265000 describe deploy/metrics-server -n kube-system: exit status 1 (27.326708ms)

** stderr ** 
	error: context "no-preload-265000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-265000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.335333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.177720959s)

-- stdout --
	* [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	* Restarting existing qemu2 VM for "no-preload-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:36.066829   20466 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:36.066960   20466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:36.066963   20466 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:36.066966   20466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:36.067110   20466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:36.068172   20466 out.go:298] Setting JSON to false
	I0729 04:41:36.084801   20466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9665,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:36.084880   20466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:36.088713   20466 out.go:177] * [no-preload-265000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:36.095552   20466 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:36.095600   20466 notify.go:220] Checking for updates...
	I0729 04:41:36.103687   20466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:36.106665   20466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:36.109788   20466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:36.112658   20466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:36.115551   20466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:36.118894   20466 config.go:182] Loaded profile config "no-preload-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:41:36.119169   20466 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:36.127215   20466 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:41:36.134671   20466 start.go:297] selected driver: qemu2
	I0729 04:41:36.134679   20466 start.go:901] validating driver "qemu2" against &{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:36.134723   20466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:36.137326   20466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:36.137371   20466 cni.go:84] Creating CNI manager for ""
	I0729 04:41:36.137378   20466 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:41:36.137403   20466 start.go:340] cluster config:
	{Name:no-preload-265000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-265000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:36.140907   20466 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.149634   20466 out.go:177] * Starting "no-preload-265000" primary control-plane node in "no-preload-265000" cluster
	I0729 04:41:36.153618   20466 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:41:36.153680   20466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/no-preload-265000/config.json ...
	I0729 04:41:36.153701   20466 cache.go:107] acquiring lock: {Name:mk899f9a594768a2184e26b206c707132da4274d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153716   20466 cache.go:107] acquiring lock: {Name:mk8842ae6ad28a24fa503e66c7d7e0f4e6e478af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153754   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0729 04:41:36.153760   20466 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 60.459µs
	I0729 04:41:36.153766   20466 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0729 04:41:36.153771   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0729 04:41:36.153774   20466 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 64.875µs
	I0729 04:41:36.153778   20466 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0729 04:41:36.153775   20466 cache.go:107] acquiring lock: {Name:mk4ced7186c686211c3b1b988c0aac113a85affa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153786   20466 cache.go:107] acquiring lock: {Name:mk503ec5c5eec1785a0b6d15fd504cdd12d81e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153810   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0729 04:41:36.153816   20466 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 41.125µs
	I0729 04:41:36.153819   20466 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0729 04:41:36.153824   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0729 04:41:36.153824   20466 cache.go:107] acquiring lock: {Name:mk63f78fab619bfea21b5693ae87e8cc5f7577a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153833   20466 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 43µs
	I0729 04:41:36.153839   20466 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0729 04:41:36.153810   20466 cache.go:107] acquiring lock: {Name:mk28200d3381776576653142c4c685edd08ef9f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153855   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0729 04:41:36.153860   20466 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 35.709µs
	I0729 04:41:36.153863   20466 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0729 04:41:36.153873   20466 cache.go:107] acquiring lock: {Name:mk1401097a3625fcfc93e0ec4b7d43f70b490ef7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153897   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0729 04:41:36.153902   20466 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 118.75µs
	I0729 04:41:36.153905   20466 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0729 04:41:36.153909   20466 cache.go:107] acquiring lock: {Name:mk7155f8a14bb7598198a4dc3781f0f9bffb9786 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:36.153940   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0729 04:41:36.153949   20466 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 96.291µs
	I0729 04:41:36.153952   20466 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0729 04:41:36.153963   20466 cache.go:115] /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0729 04:41:36.153969   20466 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 129.666µs
	I0729 04:41:36.153975   20466 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0729 04:41:36.153979   20466 cache.go:87] Successfully saved all images to host disk.
	I0729 04:41:36.154075   20466 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:36.154103   20466 start.go:364] duration metric: took 22.625µs to acquireMachinesLock for "no-preload-265000"
	I0729 04:41:36.154112   20466 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:41:36.154119   20466 fix.go:54] fixHost starting: 
	I0729 04:41:36.154223   20466 fix.go:112] recreateIfNeeded on no-preload-265000: state=Stopped err=<nil>
	W0729 04:41:36.154230   20466 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:41:36.162643   20466 out.go:177] * Restarting existing qemu2 VM for "no-preload-265000" ...
	I0729 04:41:36.166507   20466 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:36.166540   20466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f2:fd:57:7c:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:36.168465   20466 main.go:141] libmachine: STDOUT: 
	I0729 04:41:36.168481   20466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:36.168506   20466 fix.go:56] duration metric: took 14.388583ms for fixHost
	I0729 04:41:36.168509   20466 start.go:83] releasing machines lock for "no-preload-265000", held for 14.403334ms
	W0729 04:41:36.168514   20466 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:36.168539   20466 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:36.168543   20466 start.go:729] Will try again in 5 seconds ...
	I0729 04:41:41.169037   20466 start.go:360] acquireMachinesLock for no-preload-265000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:41.169348   20466 start.go:364] duration metric: took 244.5µs to acquireMachinesLock for "no-preload-265000"
	I0729 04:41:41.169439   20466 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:41:41.169456   20466 fix.go:54] fixHost starting: 
	I0729 04:41:41.169893   20466 fix.go:112] recreateIfNeeded on no-preload-265000: state=Stopped err=<nil>
	W0729 04:41:41.169915   20466 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:41:41.173245   20466 out.go:177] * Restarting existing qemu2 VM for "no-preload-265000" ...
	I0729 04:41:41.179345   20466 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:41.179555   20466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f2:fd:57:7c:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/no-preload-265000/disk.qcow2
	I0729 04:41:41.185843   20466 main.go:141] libmachine: STDOUT: 
	I0729 04:41:41.185883   20466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:41.185926   20466 fix.go:56] duration metric: took 16.471833ms for fixHost
	I0729 04:41:41.185941   20466 start.go:83] releasing machines lock for "no-preload-265000", held for 16.571041ms
	W0729 04:41:41.186046   20466 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:41.193225   20466 out.go:177] 
	W0729 04:41:41.196282   20466 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:41.196307   20466 out.go:239] * 
	* 
	W0729 04:41:41.197548   20466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:41.211362   20466 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-265000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (51.715083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
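Unlike FirstStart, this run takes the restart path ("Skipping create...Using existing machine configuration", then "Restarting existing qemu2 VM"), so it fails in about 5s instead of 10s: two restart attempts, five seconds apart, both hitting the same refused socket. The image cache is also already warm; every cache.go check above resolves in microseconds. A quick way to confirm the cached tarballs the test relied on (directory copied from the log):

	# Tarballs cached during FirstStart and reused here, per the cache.go lines:
	ls -lh /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/images/arm64/registry.k8s.io/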

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-265000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (30.837375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-265000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.538792ms)

** stderr ** 
	error: context "no-preload-265000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-265000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.195167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-265000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.977125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
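The diff above is go-cmp want-vs-got output: each "-" entry is an image the test expected "image list" to report, and the got side is empty because the VM never started. The listing that produced it (command copied from the log):

	out/minikube-darwin-arm64 -p no-preload-265000 image list --format=json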

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-265000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-265000 --alsologtostderr -v=1: exit status 83 (41.554208ms)

-- stdout --
	* The control-plane node no-preload-265000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-265000"

-- /stdout --
** stderr ** 
	I0729 04:41:41.455255   20486 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:41.455445   20486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:41.455448   20486 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:41.455450   20486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:41.455586   20486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:41.455817   20486 out.go:298] Setting JSON to false
	I0729 04:41:41.455826   20486 mustload.go:65] Loading cluster: no-preload-265000
	I0729 04:41:41.456014   20486 config.go:182] Loaded profile config "no-preload-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:41:41.460884   20486 out.go:177] * The control-plane node no-preload-265000 host is not running: state=Stopped
	I0729 04:41:41.464891   20486 out.go:177]   To start a cluster, run: "minikube start -p no-preload-265000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-265000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (28.379625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (29.286291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-265000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
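Three distinct exit codes separate the failure modes in this group, as observed in this run: start exits 80 when guest provisioning fails (GUEST_PROVISION), pause exits 83 alongside the "host is not running" hint, and status exits 7 while printing "Stopped", which helpers_test.go explicitly treats as "may be ok". To see the status code pair for yourself (binary path and profile name copied from the log):

	out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000; echo "exit: $?"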

TestStartStop/group/embed-certs/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.962096625s)

-- stdout --
	* [embed-certs-846000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:41.765778   20503 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:41.765912   20503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:41.765915   20503 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:41.765917   20503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:41.766052   20503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:41.767208   20503 out.go:298] Setting JSON to false
	I0729 04:41:41.783602   20503 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9670,"bootTime":1722243631,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:41.783669   20503 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:41.788887   20503 out.go:177] * [embed-certs-846000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:41.795010   20503 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:41.795112   20503 notify.go:220] Checking for updates...
	I0729 04:41:41.801980   20503 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:41.804933   20503 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:41.807982   20503 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:41.810933   20503 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:41.813968   20503 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:41.817268   20503 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:41.817331   20503 config.go:182] Loaded profile config "stopped-upgrade-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0729 04:41:41.817374   20503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:41.820913   20503 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:41:41.827858   20503 start.go:297] selected driver: qemu2
	I0729 04:41:41.827864   20503 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:41:41.827869   20503 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:41.830058   20503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:41:41.832952   20503 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:41:41.837002   20503 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:41.837042   20503 cni.go:84] Creating CNI manager for ""
	I0729 04:41:41.837049   20503 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:41:41.837052   20503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:41:41.837082   20503 start.go:340] cluster config:
	{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:41.840541   20503 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:41.849023   20503 out.go:177] * Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	I0729 04:41:41.852930   20503 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:41:41.852942   20503 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:41:41.852950   20503 cache.go:56] Caching tarball of preloaded images
	I0729 04:41:41.852995   20503 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:41:41.853001   20503 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:41:41.853048   20503 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/embed-certs-846000/config.json ...
	I0729 04:41:41.853058   20503 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/embed-certs-846000/config.json: {Name:mk1b7011b21cba94990ac8b1f0b24779507515cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:41:41.853313   20503 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:41.853346   20503 start.go:364] duration metric: took 24.666µs to acquireMachinesLock for "embed-certs-846000"
	I0729 04:41:41.853358   20503 start.go:93] Provisioning new machine with config: &{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:41.853383   20503 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:41.858909   20503 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:41.873804   20503 start.go:159] libmachine.API.Create for "embed-certs-846000" (driver="qemu2")
	I0729 04:41:41.873829   20503 client.go:168] LocalClient.Create starting
	I0729 04:41:41.873894   20503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:41.873925   20503 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:41.873937   20503 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:41.873974   20503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:41.873998   20503 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:41.874009   20503 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:41.874481   20503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:42.022801   20503 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:42.191033   20503 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:42.191046   20503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:42.191283   20503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:42.200949   20503 main.go:141] libmachine: STDOUT: 
	I0729 04:41:42.200969   20503 main.go:141] libmachine: STDERR: 
	I0729 04:41:42.201033   20503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2 +20000M
	I0729 04:41:42.209253   20503 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:42.209268   20503 main.go:141] libmachine: STDERR: 
	I0729 04:41:42.209282   20503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:42.209286   20503 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:42.209302   20503 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:42.209325   20503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:fe:66:f3:43:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:42.211069   20503 main.go:141] libmachine: STDOUT: 
	I0729 04:41:42.211085   20503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:42.211104   20503 client.go:171] duration metric: took 337.274709ms to LocalClient.Create
	I0729 04:41:44.213253   20503 start.go:128] duration metric: took 2.359898625s to createHost
	I0729 04:41:44.213344   20503 start.go:83] releasing machines lock for "embed-certs-846000", held for 2.360040791s
	W0729 04:41:44.213501   20503 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:44.227508   20503 out.go:177] * Deleting "embed-certs-846000" in qemu2 ...
	W0729 04:41:44.248892   20503 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:44.248916   20503 start.go:729] Will try again in 5 seconds ...
	I0729 04:41:49.251107   20503 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:49.251579   20503 start.go:364] duration metric: took 366.25µs to acquireMachinesLock for "embed-certs-846000"
	I0729 04:41:49.251749   20503 start.go:93] Provisioning new machine with config: &{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:49.252047   20503 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:49.263747   20503 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:49.313302   20503 start.go:159] libmachine.API.Create for "embed-certs-846000" (driver="qemu2")
	I0729 04:41:49.313360   20503 client.go:168] LocalClient.Create starting
	I0729 04:41:49.313477   20503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:49.313541   20503 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:49.313560   20503 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:49.313625   20503 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:49.313669   20503 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:49.313682   20503 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:49.314320   20503 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:49.494407   20503 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:49.637028   20503 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:49.637035   20503 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:49.637265   20503 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:49.646945   20503 main.go:141] libmachine: STDOUT: 
	I0729 04:41:49.646964   20503 main.go:141] libmachine: STDERR: 
	I0729 04:41:49.647021   20503 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2 +20000M
	I0729 04:41:49.654928   20503 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:49.654947   20503 main.go:141] libmachine: STDERR: 
	I0729 04:41:49.654958   20503 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:49.654962   20503 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:49.654981   20503 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:49.655008   20503 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:35:61:31:40:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:49.656694   20503 main.go:141] libmachine: STDOUT: 
	I0729 04:41:49.656711   20503 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:49.656723   20503 client.go:171] duration metric: took 343.366625ms to LocalClient.Create
	I0729 04:41:51.658831   20503 start.go:128] duration metric: took 2.406815292s to createHost
	I0729 04:41:51.658897   20503 start.go:83] releasing machines lock for "embed-certs-846000", held for 2.407346708s
	W0729 04:41:51.659206   20503 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:51.675691   20503 out.go:177] 
	W0729 04:41:51.678889   20503 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:51.678916   20503 out.go:239] * 
	* 
	W0729 04:41:51.681026   20503 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:51.689764   20503 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (49.969833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.01s)
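Note: every FirstStart failure in this group shares one root cause, visible in the stderr above: socket_vmnet_client cannot reach the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so QEMU never receives its network file descriptor and host creation aborts. That points at the socket_vmnet service on the CI host being down, not at the test logic. A minimal connectivity probe, assuming the socket path shown in the config dump (SocketVMnetPath:/var/run/socket_vmnet):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the cluster config above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // In this run the error was "connection refused", i.e. nothing listening.
            fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
    }

If the probe fails, restarting the daemon on the host (for a Homebrew install, something like `sudo brew services restart socket_vmnet`) is the likely fix.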

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.4120215s)

-- stdout --
	* [default-k8s-diff-port-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-011000" primary control-plane node in "default-k8s-diff-port-011000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-011000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:43.733443   20523 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:43.733578   20523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:43.733583   20523 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:43.733585   20523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:43.733724   20523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:43.734818   20523 out.go:298] Setting JSON to false
	I0729 04:41:43.750871   20523 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9672,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:43.750932   20523 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:43.756242   20523 out.go:177] * [default-k8s-diff-port-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:43.763163   20523 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:43.763188   20523 notify.go:220] Checking for updates...
	I0729 04:41:43.771139   20523 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:43.774133   20523 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:43.777176   20523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:43.780090   20523 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:43.783177   20523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:43.786414   20523 config.go:182] Loaded profile config "embed-certs-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:43.786475   20523 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:43.786523   20523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:43.791144   20523 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:41:43.798178   20523 start.go:297] selected driver: qemu2
	I0729 04:41:43.798186   20523 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:41:43.798195   20523 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:43.800384   20523 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:41:43.804099   20523 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:41:43.807207   20523 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:43.807251   20523 cni.go:84] Creating CNI manager for ""
	I0729 04:41:43.807266   20523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:41:43.807269   20523 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:41:43.807303   20523 start.go:340] cluster config:
	{Name:default-k8s-diff-port-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:43.810957   20523 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:43.819235   20523 out.go:177] * Starting "default-k8s-diff-port-011000" primary control-plane node in "default-k8s-diff-port-011000" cluster
	I0729 04:41:43.823158   20523 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:41:43.823172   20523 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:41:43.823181   20523 cache.go:56] Caching tarball of preloaded images
	I0729 04:41:43.823235   20523 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:41:43.823241   20523 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:41:43.823295   20523 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/default-k8s-diff-port-011000/config.json ...
	I0729 04:41:43.823306   20523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/default-k8s-diff-port-011000/config.json: {Name:mkfb2c23c3df2c488e464e4f4d55361b6c310ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:41:43.823540   20523 start.go:360] acquireMachinesLock for default-k8s-diff-port-011000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:44.213581   20523 start.go:364] duration metric: took 390.009208ms to acquireMachinesLock for "default-k8s-diff-port-011000"
	I0729 04:41:44.213743   20523 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:44.213917   20523 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:44.219494   20523 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:44.267741   20523 start.go:159] libmachine.API.Create for "default-k8s-diff-port-011000" (driver="qemu2")
	I0729 04:41:44.267783   20523 client.go:168] LocalClient.Create starting
	I0729 04:41:44.267897   20523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:44.267954   20523 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:44.267969   20523 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:44.268047   20523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:44.268108   20523 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:44.268121   20523 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:44.268862   20523 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:44.508353   20523 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:44.577679   20523 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:44.577684   20523 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:44.577902   20523 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:44.587117   20523 main.go:141] libmachine: STDOUT: 
	I0729 04:41:44.587131   20523 main.go:141] libmachine: STDERR: 
	I0729 04:41:44.587185   20523 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2 +20000M
	I0729 04:41:44.594886   20523 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:44.594898   20523 main.go:141] libmachine: STDERR: 
	I0729 04:41:44.594910   20523 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:44.594919   20523 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:44.594940   20523 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:44.594971   20523 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ba:53:4d:6d:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:44.596549   20523 main.go:141] libmachine: STDOUT: 
	I0729 04:41:44.596562   20523 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:44.596581   20523 client.go:171] duration metric: took 328.799792ms to LocalClient.Create
	I0729 04:41:46.598764   20523 start.go:128] duration metric: took 2.384815708s to createHost
	I0729 04:41:46.598814   20523 start.go:83] releasing machines lock for "default-k8s-diff-port-011000", held for 2.385254292s
	W0729 04:41:46.598869   20523 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:46.618065   20523 out.go:177] * Deleting "default-k8s-diff-port-011000" in qemu2 ...
	W0729 04:41:46.648544   20523 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:46.648578   20523 start.go:729] Will try again in 5 seconds ...
	I0729 04:41:51.650671   20523 start.go:360] acquireMachinesLock for default-k8s-diff-port-011000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:51.658960   20523 start.go:364] duration metric: took 8.18775ms to acquireMachinesLock for "default-k8s-diff-port-011000"
	I0729 04:41:51.659103   20523 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:41:51.659377   20523 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:41:51.668817   20523 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:41:51.718477   20523 start.go:159] libmachine.API.Create for "default-k8s-diff-port-011000" (driver="qemu2")
	I0729 04:41:51.718521   20523 client.go:168] LocalClient.Create starting
	I0729 04:41:51.718612   20523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:41:51.718673   20523 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:51.718690   20523 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:51.718753   20523 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:41:51.718783   20523 main.go:141] libmachine: Decoding PEM data...
	I0729 04:41:51.718794   20523 main.go:141] libmachine: Parsing certificate...
	I0729 04:41:51.719328   20523 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:41:51.953559   20523 main.go:141] libmachine: Creating SSH key...
	I0729 04:41:52.058593   20523 main.go:141] libmachine: Creating Disk image...
	I0729 04:41:52.058601   20523 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:41:52.058785   20523 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:52.068215   20523 main.go:141] libmachine: STDOUT: 
	I0729 04:41:52.068232   20523 main.go:141] libmachine: STDERR: 
	I0729 04:41:52.068307   20523 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2 +20000M
	I0729 04:41:52.076573   20523 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:41:52.076588   20523 main.go:141] libmachine: STDERR: 
	I0729 04:41:52.076601   20523 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:52.076606   20523 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:41:52.076616   20523 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:52.076647   20523 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:50:a9:56:29:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:52.078459   20523 main.go:141] libmachine: STDOUT: 
	I0729 04:41:52.078472   20523 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:52.078484   20523 client.go:171] duration metric: took 359.967792ms to LocalClient.Create
	I0729 04:41:54.080649   20523 start.go:128] duration metric: took 2.42128225s to createHost
	I0729 04:41:54.080694   20523 start.go:83] releasing machines lock for "default-k8s-diff-port-011000", held for 2.421765208s
	W0729 04:41:54.081109   20523 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:54.086874   20523 out.go:177] 
	W0729 04:41:54.092791   20523 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:54.092814   20523 out.go:239] * 
	* 
	W0729 04:41:54.095721   20523 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:41:54.103788   20523 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (66.280583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.48s)
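Note: the roughly 10 s FirstStart durations come from minikube's built-in retry, also visible in the stderr: the first create attempt fails after about 2.4 s, the half-created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and the second attempt fails identically before the run exits with GUEST_PROVISION. A sketch of that control flow (names are illustrative, not minikube's actual API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errVmnet reproduces the error string seen in this run.
    var errVmnet = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

    // createHost stands in for start.go's host creation; in this run both
    // attempts failed at the socket_vmnet_client step.
    func createHost() error { return errVmnet }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }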

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-846000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-846000 create -f testdata/busybox.yaml: exit status 1 (30.255875ms)

** stderr ** 
	error: context "embed-certs-846000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-846000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (32.435708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (31.775458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
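Note: DeployApp and the remaining serial steps are cascading failures rather than independent ones. Because FirstStart never created the cluster, no kubeconfig context named after the profile exists, and every `kubectl --context <profile> ...` call fails immediately with "context ... does not exist". A quick pre-flight check for that condition (a hypothetical helper, not part of the test suite):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // contextExists reports whether kubectl knows the named context;
    // `kubectl config get-contexts <name>` exits non-zero for unknown contexts.
    func contextExists(name string) bool {
        return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
    }

    func main() {
        if !contextExists("embed-certs-846000") {
            fmt.Println("context missing: FirstStart failed, so dependent steps will fail too")
        }
    }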

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-846000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-846000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-846000 describe deploy/metrics-server -n kube-system: exit status 1 (30.572458ms)

** stderr ** 
	error: context "embed-certs-846000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-846000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (29.703541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-011000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-011000 create -f testdata/busybox.yaml: exit status 1 (29.1545ms)

** stderr ** 
	error: context "default-k8s-diff-port-011000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-011000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (28.152334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (28.120708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-011000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-011000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-011000 describe deploy/metrics-server -n kube-system: exit status 1 (26.598708ms)

** stderr ** 
	error: context "default-k8s-diff-port-011000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-011000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (28.526958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.189021875s)

-- stdout --
	* [embed-certs-846000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	* Restarting existing qemu2 VM for "embed-certs-846000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-846000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:56.039525   20601 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:56.039653   20601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:56.039656   20601 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:56.039658   20601 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:56.039787   20601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:56.040793   20601 out.go:298] Setting JSON to false
	I0729 04:41:56.056935   20601 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9685,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:56.057002   20601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:56.062382   20601 out.go:177] * [embed-certs-846000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:56.069389   20601 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:56.069452   20601 notify.go:220] Checking for updates...
	I0729 04:41:56.077307   20601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:56.081316   20601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:56.084379   20601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:56.087483   20601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:56.090276   20601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:56.093601   20601 config.go:182] Loaded profile config "embed-certs-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:56.093862   20601 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:56.097377   20601 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:41:56.104335   20601 start.go:297] selected driver: qemu2
	I0729 04:41:56.104352   20601 start.go:901] validating driver "qemu2" against &{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:56.104405   20601 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:56.106658   20601 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:56.106677   20601 cni.go:84] Creating CNI manager for ""
	I0729 04:41:56.106684   20601 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:41:56.106709   20601 start.go:340] cluster config:
	{Name:embed-certs-846000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-846000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:56.110309   20601 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:56.119305   20601 out.go:177] * Starting "embed-certs-846000" primary control-plane node in "embed-certs-846000" cluster
	I0729 04:41:56.123325   20601 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:41:56.123343   20601 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:41:56.123359   20601 cache.go:56] Caching tarball of preloaded images
	I0729 04:41:56.123407   20601 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:41:56.123413   20601 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:41:56.123468   20601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/embed-certs-846000/config.json ...
	I0729 04:41:56.123941   20601 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:56.123968   20601 start.go:364] duration metric: took 21.833µs to acquireMachinesLock for "embed-certs-846000"
	I0729 04:41:56.123978   20601 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:41:56.123984   20601 fix.go:54] fixHost starting: 
	I0729 04:41:56.124103   20601 fix.go:112] recreateIfNeeded on embed-certs-846000: state=Stopped err=<nil>
	W0729 04:41:56.124111   20601 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:41:56.132379   20601 out.go:177] * Restarting existing qemu2 VM for "embed-certs-846000" ...
	I0729 04:41:56.136276   20601 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:56.136312   20601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:35:61:31:40:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:41:56.138329   20601 main.go:141] libmachine: STDOUT: 
	I0729 04:41:56.138352   20601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:56.138382   20601 fix.go:56] duration metric: took 14.398208ms for fixHost
	I0729 04:41:56.138387   20601 start.go:83] releasing machines lock for "embed-certs-846000", held for 14.414542ms
	W0729 04:41:56.138394   20601 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:56.138434   20601 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:56.138442   20601 start.go:729] Will try again in 5 seconds ...
	I0729 04:42:01.140575   20601 start.go:360] acquireMachinesLock for embed-certs-846000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:42:01.141114   20601 start.go:364] duration metric: took 408.458µs to acquireMachinesLock for "embed-certs-846000"
	I0729 04:42:01.141272   20601 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:42:01.141292   20601 fix.go:54] fixHost starting: 
	I0729 04:42:01.142003   20601 fix.go:112] recreateIfNeeded on embed-certs-846000: state=Stopped err=<nil>
	W0729 04:42:01.142032   20601 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:42:01.150565   20601 out.go:177] * Restarting existing qemu2 VM for "embed-certs-846000" ...
	I0729 04:42:01.154693   20601 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:42:01.154900   20601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:35:61:31:40:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/embed-certs-846000/disk.qcow2
	I0729 04:42:01.164449   20601 main.go:141] libmachine: STDOUT: 
	I0729 04:42:01.164504   20601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:42:01.164588   20601 fix.go:56] duration metric: took 23.298334ms for fixHost
	I0729 04:42:01.164603   20601 start.go:83] releasing machines lock for "embed-certs-846000", held for 23.464959ms
	W0729 04:42:01.164769   20601 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-846000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:01.172589   20601 out.go:177] 
	W0729 04:42:01.176803   20601 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:42:01.176827   20601 out.go:239] * 
	* 
	W0729 04:42:01.179329   20601 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:42:01.187715   20601 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-846000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (65.663625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
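
Both restart attempts above die at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet. That points at the host-side socket_vmnet daemon on the Jenkins agent rather than at minikube or the profile itself. A minimal host-triage sketch, assuming shell access to the agent (the paths come from the libmachine command line above; the launchd check only applies if the daemon is run as a service):

    # Does the unix socket the client is trying to reach exist at all?
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet daemon process alive?
    pgrep -fl socket_vmnet
    # If the daemon is managed by launchd (e.g. installed as a service), is it loaded?
    sudo launchctl list | grep -i socket_vmnet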

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.612525583s)

-- stdout --
	* [default-k8s-diff-port-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-011000" primary control-plane node in "default-k8s-diff-port-011000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-011000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:41:57.684269   20620 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:41:57.684425   20620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:57.684429   20620 out.go:304] Setting ErrFile to fd 2...
	I0729 04:41:57.684431   20620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:41:57.684584   20620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:41:57.685602   20620 out.go:298] Setting JSON to false
	I0729 04:41:57.701659   20620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9686,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:41:57.701731   20620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:41:57.705668   20620 out.go:177] * [default-k8s-diff-port-011000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:41:57.712616   20620 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:41:57.712695   20620 notify.go:220] Checking for updates...
	I0729 04:41:57.720557   20620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:41:57.723544   20620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:41:57.726621   20620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:41:57.729644   20620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:41:57.732565   20620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:41:57.735824   20620 config.go:182] Loaded profile config "default-k8s-diff-port-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:41:57.736084   20620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:41:57.739520   20620 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:41:57.746619   20620 start.go:297] selected driver: qemu2
	I0729 04:41:57.746627   20620 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:57.746702   20620 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:41:57.749038   20620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:41:57.749085   20620 cni.go:84] Creating CNI manager for ""
	I0729 04:41:57.749093   20620 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:41:57.749120   20620 start.go:340] cluster config:
	{Name:default-k8s-diff-port-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:41:57.752742   20620 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:41:57.761588   20620 out.go:177] * Starting "default-k8s-diff-port-011000" primary control-plane node in "default-k8s-diff-port-011000" cluster
	I0729 04:41:57.765623   20620 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:41:57.765637   20620 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:41:57.765646   20620 cache.go:56] Caching tarball of preloaded images
	I0729 04:41:57.765695   20620 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:41:57.765700   20620 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:41:57.765752   20620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/default-k8s-diff-port-011000/config.json ...
	I0729 04:41:57.766261   20620 start.go:360] acquireMachinesLock for default-k8s-diff-port-011000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:41:57.766289   20620 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "default-k8s-diff-port-011000"
	I0729 04:41:57.766298   20620 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:41:57.766303   20620 fix.go:54] fixHost starting: 
	I0729 04:41:57.766411   20620 fix.go:112] recreateIfNeeded on default-k8s-diff-port-011000: state=Stopped err=<nil>
	W0729 04:41:57.766419   20620 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:41:57.770593   20620 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-011000" ...
	I0729 04:41:57.777631   20620 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:41:57.777665   20620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:50:a9:56:29:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:41:57.779644   20620 main.go:141] libmachine: STDOUT: 
	I0729 04:41:57.779664   20620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:41:57.779695   20620 fix.go:56] duration metric: took 13.3915ms for fixHost
	I0729 04:41:57.779699   20620 start.go:83] releasing machines lock for "default-k8s-diff-port-011000", held for 13.406208ms
	W0729 04:41:57.779729   20620 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:41:57.779761   20620 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:41:57.779765   20620 start.go:729] Will try again in 5 seconds ...
	I0729 04:42:02.781860   20620 start.go:360] acquireMachinesLock for default-k8s-diff-port-011000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:42:04.199279   20620 start.go:364] duration metric: took 1.417303833s to acquireMachinesLock for "default-k8s-diff-port-011000"
	I0729 04:42:04.199388   20620 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:42:04.199409   20620 fix.go:54] fixHost starting: 
	I0729 04:42:04.200186   20620 fix.go:112] recreateIfNeeded on default-k8s-diff-port-011000: state=Stopped err=<nil>
	W0729 04:42:04.200219   20620 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:42:04.205832   20620 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-011000" ...
	I0729 04:42:04.219841   20620 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:42:04.220092   20620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:50:a9:56:29:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/default-k8s-diff-port-011000/disk.qcow2
	I0729 04:42:04.229826   20620 main.go:141] libmachine: STDOUT: 
	I0729 04:42:04.229885   20620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:42:04.229968   20620 fix.go:56] duration metric: took 30.558083ms for fixHost
	I0729 04:42:04.229984   20620 start.go:83] releasing machines lock for "default-k8s-diff-port-011000", held for 30.666ms
	W0729 04:42:04.230176   20620 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-011000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:04.237737   20620 out.go:177] 
	W0729 04:42:04.240821   20620 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:42:04.240839   20620 out.go:239] * 
	* 
	W0729 04:42:04.242918   20620 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:42:04.253777   20620 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (59.695583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.67s)
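
The failure output itself names the recovery path ("Running \"minikube delete -p default-k8s-diff-port-011000\" may fix it"), although with the socket_vmnet daemon unreachable a recreate would hit the same connection error. A sketch of that recovery, reusing only the binary, profile name, and flags that appear in this run:

    # Discard the broken profile, then recreate it with the flags the test used.
    out/minikube-darwin-arm64 delete -p default-k8s-diff-port-011000
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-011000 --memory=2200 \
      --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.30.3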

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-846000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (32.222833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-846000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-846000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-846000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.874667ms)

** stderr ** 
	error: context "embed-certs-846000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-846000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (28.486833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-846000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (28.638541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
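
The (-want +got) block above reads as a go-cmp style diff: every expected image sits on the "-" (want) side and nothing appears on the "+" (got) side, i.e. the listing returned no images at all for the stopped profile. A sketch for reproducing that by hand with the same command the test ran (expected to print an empty list while the profile's host is Stopped):

    # With the VM stopped there is nothing to enumerate, which is why all
    # eight expected v1.30.3 images appear as missing in the diff above.
    out/minikube-darwin-arm64 -p embed-certs-846000 image list --format=json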

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-846000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-846000 --alsologtostderr -v=1: exit status 83 (43.007667ms)

-- stdout --
	* The control-plane node embed-certs-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-846000"

-- /stdout --
** stderr ** 
	I0729 04:42:01.454050   20641 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:01.454208   20641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:01.454211   20641 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:01.454213   20641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:01.454347   20641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:42:01.454568   20641 out.go:298] Setting JSON to false
	I0729 04:42:01.454574   20641 mustload.go:65] Loading cluster: embed-certs-846000
	I0729 04:42:01.454750   20641 config.go:182] Loaded profile config "embed-certs-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:42:01.460024   20641 out.go:177] * The control-plane node embed-certs-846000 host is not running: state=Stopped
	I0729 04:42:01.464053   20641 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-846000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-846000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (29.422375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (28.608458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (10.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (10.147336208s)

-- stdout --
	* [newest-cni-108000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-108000" primary control-plane node in "newest-cni-108000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-108000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:42:01.770901   20658 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:01.771049   20658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:01.771052   20658 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:01.771054   20658 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:01.771207   20658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:42:01.772273   20658 out.go:298] Setting JSON to false
	I0729 04:42:01.788450   20658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9690,"bootTime":1722243631,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:42:01.788521   20658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:42:01.793009   20658 out.go:177] * [newest-cni-108000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:42:01.799940   20658 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:42:01.800006   20658 notify.go:220] Checking for updates...
	I0729 04:42:01.808025   20658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:42:01.810933   20658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:42:01.814013   20658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:42:01.817018   20658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:42:01.819968   20658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:42:01.823247   20658 config.go:182] Loaded profile config "default-k8s-diff-port-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:42:01.823308   20658 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:42:01.823352   20658 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:42:01.828008   20658 out.go:177] * Using the qemu2 driver based on user configuration
	I0729 04:42:01.835004   20658 start.go:297] selected driver: qemu2
	I0729 04:42:01.835012   20658 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:42:01.835021   20658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:42:01.837460   20658 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 04:42:01.837493   20658 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 04:42:01.844930   20658 out.go:177] * Automatically selected the socket_vmnet network
	I0729 04:42:01.848101   20658 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 04:42:01.848148   20658 cni.go:84] Creating CNI manager for ""
	I0729 04:42:01.848156   20658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:42:01.848166   20658 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:42:01.848193   20658 start.go:340] cluster config:
	{Name:newest-cni-108000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:42:01.851982   20658 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:42:01.859978   20658 out.go:177] * Starting "newest-cni-108000" primary control-plane node in "newest-cni-108000" cluster
	I0729 04:42:01.863958   20658 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:42:01.863974   20658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:42:01.863987   20658 cache.go:56] Caching tarball of preloaded images
	I0729 04:42:01.864054   20658 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:42:01.864067   20658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:42:01.864129   20658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/newest-cni-108000/config.json ...
	I0729 04:42:01.864145   20658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/newest-cni-108000/config.json: {Name:mkb902f9efad54b1ff6aebb41afff67e6fd8dafa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:42:01.864563   20658 start.go:360] acquireMachinesLock for newest-cni-108000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:42:01.864599   20658 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "newest-cni-108000"
	I0729 04:42:01.864612   20658 start.go:93] Provisioning new machine with config: &{Name:newest-cni-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:42:01.864647   20658 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:42:01.869012   20658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:42:01.887314   20658 start.go:159] libmachine.API.Create for "newest-cni-108000" (driver="qemu2")
	I0729 04:42:01.887340   20658 client.go:168] LocalClient.Create starting
	I0729 04:42:01.887397   20658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:42:01.887429   20658 main.go:141] libmachine: Decoding PEM data...
	I0729 04:42:01.887440   20658 main.go:141] libmachine: Parsing certificate...
	I0729 04:42:01.887479   20658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:42:01.887503   20658 main.go:141] libmachine: Decoding PEM data...
	I0729 04:42:01.887510   20658 main.go:141] libmachine: Parsing certificate...
	I0729 04:42:01.887907   20658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:42:02.036627   20658 main.go:141] libmachine: Creating SSH key...
	I0729 04:42:02.177431   20658 main.go:141] libmachine: Creating Disk image...
	I0729 04:42:02.177440   20658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:42:02.177667   20658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:02.187216   20658 main.go:141] libmachine: STDOUT: 
	I0729 04:42:02.187259   20658 main.go:141] libmachine: STDERR: 
	I0729 04:42:02.187308   20658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2 +20000M
	I0729 04:42:02.195173   20658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:42:02.195191   20658 main.go:141] libmachine: STDERR: 
	I0729 04:42:02.195209   20658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:02.195213   20658 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:42:02.195224   20658 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:42:02.195248   20658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:27:11:66:4f:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:02.196885   20658 main.go:141] libmachine: STDOUT: 
	I0729 04:42:02.196901   20658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:42:02.196926   20658 client.go:171] duration metric: took 309.588542ms to LocalClient.Create
	I0729 04:42:04.199047   20658 start.go:128] duration metric: took 2.334433542s to createHost
	I0729 04:42:04.199102   20658 start.go:83] releasing machines lock for "newest-cni-108000", held for 2.334547583s
	W0729 04:42:04.199172   20658 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:04.215724   20658 out.go:177] * Deleting "newest-cni-108000" in qemu2 ...
	W0729 04:42:04.267107   20658 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:04.267149   20658 start.go:729] Will try again in 5 seconds ...
	I0729 04:42:09.269224   20658 start.go:360] acquireMachinesLock for newest-cni-108000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:42:09.269708   20658 start.go:364] duration metric: took 386.625µs to acquireMachinesLock for "newest-cni-108000"
	I0729 04:42:09.269846   20658 start.go:93] Provisioning new machine with config: &{Name:newest-cni-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:42:09.270235   20658 start.go:125] createHost starting for "" (driver="qemu2")
	I0729 04:42:09.275941   20658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 04:42:09.329826   20658 start.go:159] libmachine.API.Create for "newest-cni-108000" (driver="qemu2")
	I0729 04:42:09.329873   20658 client.go:168] LocalClient.Create starting
	I0729 04:42:09.329976   20658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/ca.pem
	I0729 04:42:09.330046   20658 main.go:141] libmachine: Decoding PEM data...
	I0729 04:42:09.330064   20658 main.go:141] libmachine: Parsing certificate...
	I0729 04:42:09.330126   20658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19341-15486/.minikube/certs/cert.pem
	I0729 04:42:09.330171   20658 main.go:141] libmachine: Decoding PEM data...
	I0729 04:42:09.330184   20658 main.go:141] libmachine: Parsing certificate...
	I0729 04:42:09.330768   20658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0729 04:42:09.490292   20658 main.go:141] libmachine: Creating SSH key...
	I0729 04:42:09.826651   20658 main.go:141] libmachine: Creating Disk image...
	I0729 04:42:09.826668   20658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0729 04:42:09.826895   20658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2.raw /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:09.836734   20658 main.go:141] libmachine: STDOUT: 
	I0729 04:42:09.836765   20658 main.go:141] libmachine: STDERR: 
	I0729 04:42:09.836814   20658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2 +20000M
	I0729 04:42:09.844926   20658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0729 04:42:09.844940   20658 main.go:141] libmachine: STDERR: 
	I0729 04:42:09.844954   20658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:09.844962   20658 main.go:141] libmachine: Starting QEMU VM...
	I0729 04:42:09.844972   20658 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:42:09.845007   20658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:5f:0c:86:54:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:09.846681   20658 main.go:141] libmachine: STDOUT: 
	I0729 04:42:09.846695   20658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:42:09.846711   20658 client.go:171] duration metric: took 516.843709ms to LocalClient.Create
	I0729 04:42:11.848803   20658 start.go:128] duration metric: took 2.578586125s to createHost
	I0729 04:42:11.848856   20658 start.go:83] releasing machines lock for "newest-cni-108000", held for 2.579184375s
	W0729 04:42:11.849234   20658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-108000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-108000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:11.861652   20658 out.go:177] 
	W0729 04:42:11.864815   20658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:42:11.864842   20658 out.go:239] * 
	* 
	W0729 04:42:11.867329   20658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:42:11.880704   20658 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000: exit status 7 (71.075958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.22s)
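
Every failure in this run traces back to a single host-side problem: the qemu2 driver reaches QEMU through socket_vmnet_client, and the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet. A quick triage on the agent, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests (the relaunch command and gateway address below are assumptions, not taken from this report), might look like:

    # is the daemon alive, and does its unix socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # if not, relaunch it by hand before rerunning the suite (path and gateway assumed)
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the socket file exists but connections are refused, a stale socket left by a dead daemon is a plausible culprit; removing it and restarting the daemon is worth trying.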

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-011000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (32.0835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
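
This is a knock-on failure rather than a new one: the first start of this profile never completed, so minikube never wrote a default-k8s-diff-port-011000 context into the kubeconfig, and every kubectl call against that context fails immediately. Two quick checks from the same workspace confirm it:

    kubectl config get-contexts
    out/minikube-darwin-arm64 status -p default-k8s-diff-port-011000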

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-011000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.597709ms)

** stderr ** 
	error: context "default-k8s-diff-port-011000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-011000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (29.008833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-011000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (28.295916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
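
The (-want +got) block is a Go cmp.Diff-style listing: every expected image carries a - prefix and nothing carries a +, so image list returned an empty set, which points at a VM that never booted rather than at missing tags. To inspect the raw listing by hand (the jq filter and the repoTags field name are assumptions about the JSON shape, not confirmed by this report):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-011000 image list --format=json | jq -r '.[].repoTags[]'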

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-011000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-011000 --alsologtostderr -v=1: exit status 83 (46.179917ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-011000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-011000"

-- /stdout --
** stderr ** 
	I0729 04:42:04.512406   20680 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:04.512569   20680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:04.512573   20680 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:04.512575   20680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:04.512697   20680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:42:04.512934   20680 out.go:298] Setting JSON to false
	I0729 04:42:04.512940   20680 mustload.go:65] Loading cluster: default-k8s-diff-port-011000
	I0729 04:42:04.513123   20680 config.go:182] Loaded profile config "default-k8s-diff-port-011000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:42:04.517320   20680 out.go:177] * The control-plane node default-k8s-diff-port-011000 host is not running: state=Stopped
	I0729 04:42:04.526313   20680 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-011000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-011000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (28.964166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (28.095584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-011000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.184511208s)

-- stdout --
	* [newest-cni-108000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-108000" primary control-plane node in "newest-cni-108000" cluster
	* Restarting existing qemu2 VM for "newest-cni-108000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-108000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0729 04:42:15.734837   20737 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:15.734962   20737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:15.734965   20737 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:15.734967   20737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:15.735106   20737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:42:15.736152   20737 out.go:298] Setting JSON to false
	I0729 04:42:15.752111   20737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9704,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:42:15.752187   20737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:42:15.756317   20737 out.go:177] * [newest-cni-108000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:42:15.764177   20737 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:42:15.764224   20737 notify.go:220] Checking for updates...
	I0729 04:42:15.771279   20737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:42:15.774283   20737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:42:15.777322   20737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:42:15.780342   20737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:42:15.781671   20737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:42:15.784537   20737 config.go:182] Loaded profile config "newest-cni-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:42:15.784804   20737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:42:15.792794   20737 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:42:15.798281   20737 start.go:297] selected driver: qemu2
	I0729 04:42:15.798287   20737 start.go:901] validating driver "qemu2" against &{Name:newest-cni-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:42:15.798343   20737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:42:15.800901   20737 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 04:42:15.800946   20737 cni.go:84] Creating CNI manager for ""
	I0729 04:42:15.800953   20737 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:42:15.800979   20737 start.go:340] cluster config:
	{Name:newest-cni-108000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-108000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:42:15.804614   20737 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:42:15.813299   20737 out.go:177] * Starting "newest-cni-108000" primary control-plane node in "newest-cni-108000" cluster
	I0729 04:42:15.817336   20737 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:42:15.817353   20737 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:42:15.817364   20737 cache.go:56] Caching tarball of preloaded images
	I0729 04:42:15.817430   20737 preload.go:172] Found /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 04:42:15.817437   20737 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 04:42:15.817506   20737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/newest-cni-108000/config.json ...
	I0729 04:42:15.818007   20737 start.go:360] acquireMachinesLock for newest-cni-108000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:42:15.818042   20737 start.go:364] duration metric: took 28.584µs to acquireMachinesLock for "newest-cni-108000"
	I0729 04:42:15.818052   20737 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:42:15.818058   20737 fix.go:54] fixHost starting: 
	I0729 04:42:15.818177   20737 fix.go:112] recreateIfNeeded on newest-cni-108000: state=Stopped err=<nil>
	W0729 04:42:15.818186   20737 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:42:15.822270   20737 out.go:177] * Restarting existing qemu2 VM for "newest-cni-108000" ...
	I0729 04:42:15.830344   20737 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:42:15.830381   20737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:5f:0c:86:54:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:15.832287   20737 main.go:141] libmachine: STDOUT: 
	I0729 04:42:15.832308   20737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:42:15.832336   20737 fix.go:56] duration metric: took 14.279291ms for fixHost
	I0729 04:42:15.832340   20737 start.go:83] releasing machines lock for "newest-cni-108000", held for 14.294916ms
	W0729 04:42:15.832348   20737 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:42:15.832377   20737 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:15.832382   20737 start.go:729] Will try again in 5 seconds ...
	I0729 04:42:20.834509   20737 start.go:360] acquireMachinesLock for newest-cni-108000: {Name:mk95254b96d513edc88dfd0f8b232fb4ca1d88a0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 04:42:20.834848   20737 start.go:364] duration metric: took 274.041µs to acquireMachinesLock for "newest-cni-108000"
	I0729 04:42:20.834976   20737 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:42:20.834997   20737 fix.go:54] fixHost starting: 
	I0729 04:42:20.835652   20737 fix.go:112] recreateIfNeeded on newest-cni-108000: state=Stopped err=<nil>
	W0729 04:42:20.835679   20737 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 04:42:20.845215   20737 out.go:177] * Restarting existing qemu2 VM for "newest-cni-108000" ...
	I0729 04:42:20.848328   20737 qemu.go:418] Using hvf for hardware acceleration
	I0729 04:42:20.848567   20737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:5f:0c:86:54:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19341-15486/.minikube/machines/newest-cni-108000/disk.qcow2
	I0729 04:42:20.857266   20737 main.go:141] libmachine: STDOUT: 
	I0729 04:42:20.857325   20737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0729 04:42:20.857392   20737 fix.go:56] duration metric: took 22.397958ms for fixHost
	I0729 04:42:20.857406   20737 start.go:83] releasing machines lock for "newest-cni-108000", held for 22.537541ms
	W0729 04:42:20.857610   20737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-108000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0729 04:42:20.865221   20737 out.go:177] 
	W0729 04:42:20.868206   20737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0729 04:42:20.868237   20737 out.go:239] * 
	* 
	W0729 04:42:20.870806   20737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:42:20.878227   20737 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000: exit status 7 (72.111333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
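
Both the fresh create in FirstStart and the two restart attempts here die on the same socket_vmnet connection, so the saved VM state is not the problem. Once the daemon is reachable again, the recovery the log itself suggests is to drop the stale profile and rerun the exact start command from the test:

    out/minikube-darwin-arm64 delete -p newest-cni-108000
    out/minikube-darwin-arm64 start -p newest-cni-108000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2 --kubernetes-version=v1.31.0-beta.0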

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-108000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000: exit status 7 (29.814ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-108000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-108000 --alsologtostderr -v=1: exit status 83 (41.526958ms)

-- stdout --
	* The control-plane node newest-cni-108000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-108000"

-- /stdout --
** stderr ** 
	I0729 04:42:21.065335   20751 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:42:21.065488   20751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:21.065491   20751 out.go:304] Setting ErrFile to fd 2...
	I0729 04:42:21.065493   20751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:42:21.065637   20751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:42:21.065850   20751 out.go:298] Setting JSON to false
	I0729 04:42:21.065857   20751 mustload.go:65] Loading cluster: newest-cni-108000
	I0729 04:42:21.066042   20751 config.go:182] Loaded profile config "newest-cni-108000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0729 04:42:21.069376   20751 out.go:177] * The control-plane node newest-cni-108000 host is not running: state=Stopped
	I0729 04:42:21.073346   20751 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-108000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-108000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000: exit status 7 (28.990292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-108000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000: exit status 7 (29.998458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-108000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 7.13
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.51
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
44 TestHyperKitDriverInstallOrUpdate 10.17
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 7.22
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.81
64 TestFunctional/serial/CacheCmd/cache/add_local 1.04
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.22
80 TestFunctional/parallel/DryRun 0.23
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.22
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
116 TestFunctional/parallel/ProfileCmd/profile_list 0.08
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
121 TestFunctional/parallel/Version/short 0.03
128 TestFunctional/parallel/ImageCommands/Setup 1.87
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.64
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 0.9
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.42
267 TestNoKubernetes/serial/Stop 2.11
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
284 TestStartStop/group/old-k8s-version/serial/Stop 3.28
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 3.97
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
308 TestStartStop/group/embed-certs/serial/Stop 3.9
311 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.15
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.55
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-753000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-753000: exit status 85 (96.951584ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |          |
	|         | -p download-only-753000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:16:08
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:16:08.791616   15975 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:08.791763   15975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:08.791766   15975 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:08.791769   15975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:08.791903   15975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	W0729 04:16:08.791992   15975 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19341-15486/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19341-15486/.minikube/config/config.json: no such file or directory
	I0729 04:16:08.793308   15975 out.go:298] Setting JSON to true
	I0729 04:16:08.810544   15975 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8137,"bootTime":1722243631,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:16:08.810613   15975 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:08.816379   15975 out.go:97] [download-only-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:08.816566   15975 notify.go:220] Checking for updates...
	W0729 04:16:08.816621   15975 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 04:16:08.818201   15975 out.go:169] MINIKUBE_LOCATION=19341
	I0729 04:16:08.823624   15975 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:16:08.827842   15975 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:08.832962   15975 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:08.835875   15975 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	W0729 04:16:08.841784   15975 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:16:08.841996   15975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:08.845136   15975 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:16:08.845153   15975 start.go:297] selected driver: qemu2
	I0729 04:16:08.845168   15975 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:16:08.845224   15975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:16:08.849503   15975 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:16:08.853344   15975 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:16:08.853448   15975 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:16:08.853517   15975 cni.go:84] Creating CNI manager for ""
	I0729 04:16:08.853534   15975 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 04:16:08.853580   15975 start.go:340] cluster config:
	{Name:download-only-753000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:08.857482   15975 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:08.861851   15975 out.go:97] Downloading VM boot image ...
	I0729 04:16:08.861867   15975 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0729 04:16:13.822716   15975 out.go:97] Starting "download-only-753000" primary control-plane node in "download-only-753000" cluster
	I0729 04:16:13.822741   15975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:16:13.878505   15975 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:13.878513   15975 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:13.878672   15975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:16:13.883732   15975 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 04:16:13.883741   15975 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:13.959132   15975 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:20.113895   15975 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:20.114057   15975 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:20.810777   15975 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 04:16:20.810995   15975 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/download-only-753000/config.json ...
	I0729 04:16:20.811013   15975 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19341-15486/.minikube/profiles/download-only-753000/config.json: {Name:mk53306436022dc1bb9c5bc61fd40e745b54e730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 04:16:20.812579   15975 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 04:16:20.813060   15975 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0729 04:16:21.199695   15975 out.go:169] 
	W0729 04:16:21.203737   15975 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19341-15486/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60 0x106889a60] Decompressors:map[bz2:0x1400069f9f0 gz:0x1400069f9f8 tar:0x1400069f9a0 tar.bz2:0x1400069f9b0 tar.gz:0x1400069f9c0 tar.xz:0x1400069f9d0 tar.zst:0x1400069f9e0 tbz2:0x1400069f9b0 tgz:0x1400069f9c0 txz:0x1400069f9d0 tzst:0x1400069f9e0 xz:0x1400069fa00 zip:0x1400069fa10 zst:0x1400069fa08] Getters:map[file:0x140008fc6e0 http:0x140006388c0 https:0x14000638910] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0729 04:16:21.203766   15975 out_reason.go:110] 
	W0729 04:16:21.211691   15975 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:16:21.214630   15975 out.go:169] 
	
	
	* The control-plane node download-only-753000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-753000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-753000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (7.13s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-386000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-386000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (7.126164917s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.13s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-386000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-386000: exit status 85 (80.508125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-753000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-753000        | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -o=json --download-only        | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-386000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:16:21
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:16:21.631402   16001 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:21.631514   16001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:21.631518   16001 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:21.631520   16001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:21.631650   16001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:16:21.632770   16001 out.go:298] Setting JSON to true
	I0729 04:16:21.649807   16001 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8150,"bootTime":1722243631,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:16:21.649932   16001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:21.654700   16001 out.go:97] [download-only-386000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:21.654836   16001 notify.go:220] Checking for updates...
	I0729 04:16:21.658574   16001 out.go:169] MINIKUBE_LOCATION=19341
	I0729 04:16:21.661665   16001 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:16:21.664705   16001 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:21.667558   16001 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:21.670650   16001 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	W0729 04:16:21.676522   16001 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:16:21.676648   16001 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:21.679563   16001 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:16:21.679571   16001 start.go:297] selected driver: qemu2
	I0729 04:16:21.679576   16001 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:16:21.679620   16001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:16:21.682621   16001 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:16:21.687789   16001 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:16:21.687878   16001 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:16:21.687892   16001 cni.go:84] Creating CNI manager for ""
	I0729 04:16:21.687899   16001 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:16:21.687904   16001 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:16:21.687947   16001 start.go:340] cluster config:
	{Name:download-only-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:21.691703   16001 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:21.694592   16001 out.go:97] Starting "download-only-386000" primary control-plane node in "download-only-386000" cluster
	I0729 04:16:21.694599   16001 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:16:21.760070   16001 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:16:21.760091   16001 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:21.760953   16001 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:16:21.765333   16001 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 04:16:21.765342   16001 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:21.845405   16001 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0729 04:16:26.501920   16001 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:26.502077   16001 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-386000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-386000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-386000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.51s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-771000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-771000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.512805833s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.51s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-771000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-771000: exit status 85 (76.302167ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-753000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-753000             | download-only-753000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -o=json --download-only             | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-386000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| delete  | -p download-only-386000             | download-only-386000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT | 29 Jul 24 04:16 PDT |
	| start   | -o=json --download-only             | download-only-771000 | jenkins | v1.33.1 | 29 Jul 24 04:16 PDT |                     |
	|         | -p download-only-771000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 04:16:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 04:16:29.052884   16027 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:16:29.053011   16027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:29.053014   16027 out.go:304] Setting ErrFile to fd 2...
	I0729 04:16:29.053016   16027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:16:29.053136   16027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:16:29.054266   16027 out.go:298] Setting JSON to true
	I0729 04:16:29.070678   16027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8158,"bootTime":1722243631,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:16:29.070741   16027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:16:29.075794   16027 out.go:97] [download-only-771000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:16:29.075921   16027 notify.go:220] Checking for updates...
	I0729 04:16:29.079736   16027 out.go:169] MINIKUBE_LOCATION=19341
	I0729 04:16:29.083758   16027 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:16:29.086837   16027 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:16:29.089752   16027 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:16:29.092776   16027 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	W0729 04:16:29.097132   16027 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 04:16:29.097337   16027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:16:29.099713   16027 out.go:97] Using the qemu2 driver based on user configuration
	I0729 04:16:29.099722   16027 start.go:297] selected driver: qemu2
	I0729 04:16:29.099727   16027 start.go:901] validating driver "qemu2" against <nil>
	I0729 04:16:29.099773   16027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:16:29.102781   16027 out.go:169] Automatically selected the socket_vmnet network
	I0729 04:16:29.107996   16027 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0729 04:16:29.108094   16027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:16:29.108133   16027 cni.go:84] Creating CNI manager for ""
	I0729 04:16:29.108142   16027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:16:29.108153   16027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:16:29.108188   16027 start.go:340] cluster config:
	{Name:download-only-771000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-771000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:16:29.111845   16027 iso.go:125] acquiring lock: {Name:mkd0c98a198e76211800915d75aac5ccf3108d57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:16:29.114779   16027 out.go:97] Starting "download-only-771000" primary control-plane node in "download-only-771000" cluster
	I0729 04:16:29.114786   16027 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:16:29.173498   16027 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:29.173522   16027 cache.go:56] Caching tarball of preloaded images
	I0729 04:16:29.174489   16027 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 04:16:29.177893   16027 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 04:16:29.177901   16027 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:29.251678   16027 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0729 04:16:33.655931   16027 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0729 04:16:33.656085   16027 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19341-15486/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-771000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-771000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-771000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-393000 --alsologtostderr --binary-mirror http://127.0.0.1:52921 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-393000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-393000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-621000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-621000: exit status 85 (56.494667ms)

-- stdout --
	* Profile "addons-621000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-621000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-621000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-621000: exit status 85 (52.503291ms)

-- stdout --
	* Profile "addons-621000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-621000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestHyperKitDriverInstallOrUpdate (10.17s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.17s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status: exit status 7 (30.27625ms)

-- stdout --
	nospam-129000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status: exit status 7 (29.796542ms)

-- stdout --
	nospam-129000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status: exit status 7 (30.107959ms)

-- stdout --
	nospam-129000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause: exit status 83 (39.913084ms)

-- stdout --
	* The control-plane node nospam-129000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-129000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause: exit status 83 (39.900375ms)

-- stdout --
	* The control-plane node nospam-129000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-129000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause: exit status 83 (38.761209ms)

-- stdout --
	* The control-plane node nospam-129000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-129000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause: exit status 83 (36.793833ms)

-- stdout --
	* The control-plane node nospam-129000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-129000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause: exit status 83 (39.921167ms)

-- stdout --
	* The control-plane node nospam-129000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-129000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause: exit status 83 (39.710459ms)

-- stdout --
	* The control-plane node nospam-129000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-129000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (7.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 stop: (1.943815875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 stop: (1.810258042s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-129000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-129000 stop: (3.467925375s)
--- PASS: TestErrorSpam/stop (7.22s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19341-15486/.minikube/files/etc/test/nested/copy/15973/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.81s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3009558552/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cache add minikube-local-cache-test:functional-356000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 cache delete minikube-local-cache-test:functional-356000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-356000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 config get cpus: exit status 14 (30.476959ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 config get cpus: exit status 14 (36.310584ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-356000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-356000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.571959ms)

-- stdout --
	* [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 04:18:07.827479   16521 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:18:07.827620   16521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:07.827624   16521 out.go:304] Setting ErrFile to fd 2...
	I0729 04:18:07.827626   16521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:07.827780   16521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:18:07.828735   16521 out.go:298] Setting JSON to false
	I0729 04:18:07.844729   16521 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8256,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:18:07.844803   16521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:18:07.848811   16521 out.go:177] * [functional-356000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0729 04:18:07.855751   16521 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:18:07.855838   16521 notify.go:220] Checking for updates...
	I0729 04:18:07.863742   16521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:18:07.867744   16521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:18:07.870709   16521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:18:07.873712   16521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:18:07.876756   16521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:18:07.878419   16521 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:18:07.878664   16521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:18:07.882674   16521 out.go:177] * Using the qemu2 driver based on existing profile
	I0729 04:18:07.889601   16521 start.go:297] selected driver: qemu2
	I0729 04:18:07.889609   16521 start.go:901] validating driver "qemu2" against &{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:18:07.889666   16521 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:18:07.895705   16521 out.go:177] 
	W0729 04:18:07.899643   16521 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 04:18:07.903662   16521 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-356000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-356000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-356000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.900875ms)

-- stdout --
	* [functional-356000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 04:18:07.706310   16517 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:18:07.706423   16517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:07.706426   16517 out.go:304] Setting ErrFile to fd 2...
	I0729 04:18:07.706429   16517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:18:07.706563   16517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19341-15486/.minikube/bin
	I0729 04:18:07.707985   16517 out.go:298] Setting JSON to false
	I0729 04:18:07.724735   16517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8256,"bootTime":1722243631,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0729 04:18:07.724817   16517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:18:07.729837   16517 out.go:177] * [functional-356000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0729 04:18:07.737767   16517 out.go:177]   - MINIKUBE_LOCATION=19341
	I0729 04:18:07.737846   16517 notify.go:220] Checking for updates...
	I0729 04:18:07.745659   16517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	I0729 04:18:07.749684   16517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0729 04:18:07.752732   16517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:18:07.755730   16517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	I0729 04:18:07.758723   16517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:18:07.761948   16517 config.go:182] Loaded profile config "functional-356000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:18:07.762250   16517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:18:07.766771   16517 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0729 04:18:07.773672   16517 start.go:297] selected driver: qemu2
	I0729 04:18:07.773678   16517 start.go:901] validating driver "qemu2" against &{Name:functional-356000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-356000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:18:07.773751   16517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:18:07.780690   16517 out.go:177] 
	W0729 04:18:07.784695   16517 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 04:18:07.787731   16517 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "46.907625ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.828292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "45.942625ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.520542ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/Version/short (0.03s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)

TestFunctional/parallel/ImageCommands/Setup (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.833789458s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-356000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image rm docker.io/kicbase/echo-server:functional-356000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-356000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 image save --daemon docker.io/kicbase/echo-server:functional-356000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-356000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012537958s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-356000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-356000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-356000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-356000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.64s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-312000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-312000 --output=json --user=testUser: (3.63551375s)
--- PASS: TestJSONOutput/stop/Command (3.64s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-301000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-301000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.240042ms)

-- stdout --
	{"specversion":"1.0","id":"e163d9ba-c1c0-448f-98d6-0ec46c4946aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-301000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d63eb418-d105-4463-9939-c6a3c986c4fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19341"}}
	{"specversion":"1.0","id":"d105dac1-4738-4775-9c34-e2d913173185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig"}}
	{"specversion":"1.0","id":"60197b55-4e3b-4584-9eac-1c33f526b32c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"35fc68ee-f42f-41aa-80ba-2d2c531ab875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b9eb2f82-7853-4db2-95cf-6e07188e5089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube"}}
	{"specversion":"1.0","id":"1ca9d4cb-0eb3-4455-af13-037f988031e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd22be9e-65f3-43ea-8f69-07e993c933e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-301000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-301000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.9s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.90s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-257000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.582917ms)

-- stdout --
	* [NoKubernetes-257000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19341
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19341-15486/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19341-15486/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-257000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-257000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.668791ms)

-- stdout --
	* The control-plane node NoKubernetes-257000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-257000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.637070417s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.778563416s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.42s)

TestNoKubernetes/serial/Stop (2.11s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-257000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-257000: (2.113184042s)
--- PASS: TestNoKubernetes/serial/Stop (2.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-257000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-257000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.571792ms)

-- stdout --
	* The control-plane node NoKubernetes-257000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-257000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-514000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-623000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-623000 --alsologtostderr -v=3: (3.284304834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.28s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-623000 -n old-k8s-version-623000: exit status 7 (53.539583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-623000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-265000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-265000 --alsologtostderr -v=3: (3.973371667s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-265000 -n no-preload-265000: exit status 7 (49.149916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-265000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-846000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-846000 --alsologtostderr -v=3: (3.901286083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-011000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-011000 --alsologtostderr -v=3: (3.152862875s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-846000 -n embed-certs-846000: exit status 7 (54.141292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-846000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-011000 -n default-k8s-diff-port-011000: exit status 7 (55.026875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-011000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-108000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-108000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-108000 --alsologtostderr -v=3: (3.553928667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.55s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-108000 -n newest-cni-108000: exit status 7 (60.6ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-108000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3381076398/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722251851906333000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3381076398/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722251851906333000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3381076398/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722251851906333000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3381076398/001/test-1722251851906333000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.561083ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.399583ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.22375ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.900125ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.511042ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.1665ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.525667ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo umount -f /mount-9p": exit status 83 (49.573875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3381076398/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.97s)

TestFunctional/parallel/MountCmd/specific-port (11.41s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3290988844/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.349292ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.642709ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.374667ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.645375ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.861125ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.941292ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.795917ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "sudo umount -f /mount-9p": exit status 83 (46.633708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-356000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3290988844/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.41s)

TestFunctional/parallel/MountCmd/VerifyCleanup (11.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup979040744/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup979040744/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup979040744/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (80.60325ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (84.362792ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (82.943541ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (85.576666ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (85.478958ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (85.396ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-356000 ssh "findmnt -T" /mount1: exit status 83 (83.433083ms)

-- stdout --
	* The control-plane node functional-356000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-356000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup979040744/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup979040744/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-356000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup979040744/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.24s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-159000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-159000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-159000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/hosts:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/resolv.conf:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-159000

>>> host: crictl pods:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: crictl containers:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> k8s: describe netcat deployment:
error: context "cilium-159000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-159000" does not exist

>>> k8s: netcat logs:
error: context "cilium-159000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-159000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-159000" does not exist

>>> k8s: coredns logs:
error: context "cilium-159000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-159000" does not exist

>>> k8s: api server logs:
error: context "cilium-159000" does not exist

>>> host: /etc/cni:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: ip a s:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: ip r s:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: iptables-save:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: iptables table nat:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-159000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-159000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-159000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-159000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-159000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-159000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-159000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-159000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-159000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-159000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-159000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: kubelet daemon config:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> k8s: kubelet logs:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-159000

>>> host: docker daemon status:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: docker daemon config:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: docker system info:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: cri-docker daemon status:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: cri-docker daemon config:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: cri-dockerd version:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: containerd daemon status:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: containerd daemon config:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: containerd config dump:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: crio daemon status:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: crio daemon config:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: /etc/crio:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

>>> host: crio config:
* Profile "cilium-159000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-159000"

----------------------- debugLogs end: cilium-159000 [took: 2.280393833s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-159000
--- SKIP: TestNetworkPlugins/group/cilium (2.38s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-218000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
